[Binary artifact: tar archive of Zuul CI job output containing var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log); the compressed payload is not recoverable as text.]
[d{5DkHe G2ݵo9F*l"r:V+/0 Zw:JNLm8&10L>&.Nt+$ OA葵MBJWR7k 7&-93$, L.zc-t"_}u4#HRp L4 $Y+J9k(%;XJM"e'He頸Rὸ3&ޑg\gكW/z翖\!݋$J/W}RA7U =O.2zC~_bq7QUgg@5hoM^ Nco͐uvBh& ()Ř\|VxQ/l]k7/TͳZh*yDgcc ̒pfOpɧT~\/#bYܡ?zFk_molgJd6bW3L {w9_~TQ:h\Fᒋ8R&<'%@}{fto>3WݿO~=fz|~=$F!tI̵A|o'Qo8z3 AtJ 6tݶ k_F@26hp.S|4.6[ݟn{9rl_umbR}yu  d&>O}S(ڸ7wI9&7?? O>o??(ӟ?'Οp6!m$@֬ҸbidbxO!d4@]@dLbRr2ch. M;Fe{yCEB=bRi(XV[\.it*Ř[-/En ވ^C'hl峳7m٦j+:ogr<9V-7_ȷk񽀡<qR&Y)*"b;C8SaKge{tլ͢[CgYMZu8-j&>OQL.|f]K-\hR\gNp0ޓ5qWb%>XRlWI*<$jNég `AANOOӭL.O5HPJĨsFV۠Ւ)hNcAy= ^ܫH偃F!׀j8d,3toI_AJHr3osA#SIW ayΠF~v6<+ { n+co~ <"jɄѿ/lJE* Ӓik Cm//?,8^;8 JKCP\ѠL:K(#5ࣸ.R<)fk^g7$N_`EcT`PJY'o.a@Dd%b4{UĸCo~BvhISmD3m<[a4ԋIE]Ive)Z#'QnӋ*diWjQio^^tpcJ=,V+oD7\IJ "umnx }<`s<<0+ʫ/ ]aڠ>W2@~?ˊjZ!C37];)"NپΪvOXw~u֬ }U&²3AaQh.M YR~ɶ<Eo'[3ٳӖڃV5uߵRcUM]af^ry3o {0t+%\Nl n&-[ mn[I>/*F*ՂjD[5GG߃g ~JQGӤ"ر4hRޤ"AItפiR=?ELSA.aw)}i/9vւ$g\Kr1u\4yE+Qs 4=)bM(]|<j7Sͱ_=|%_]fV5"|H("}ߪ+W \IE\%h>tq ܉(f _a4I]\[v],~7|Yy`$',CsDh8)>7qU"T;qĕ/R M~}ӳ)q _'gX{t~yPAΘQRE0D%oJ*VN{o^oW ʰ& 22M#={#}r)C"ABHSٝ4Z"vG'T~0Gقbo_oe_i"kCs W`F,V'Ye .#Ş`9<"N4q)% 9UiigBKpaLR3^@j?dR%,z sZD)`Eln%9Y0("$#Kc`΂ 1ZxL`)e r'T#>"=[iqv !rFyi)1Ho(:A~Qah4FYa$3VUH*D50ةQ*48IL)h5$"bC73jM3f JufpvY ng7fpv38 nMiID& fpv38 ng7fpv38[0X2jʳ:BYW#t,g|g J?3IIn+ 2ӢI%%M}/j-(+jRE7GiCđZҳm*"(fz::|mLdsp3ב  %1dNF03,*͸Jb,#F M ( ܢ.vSr2Ŭk3kGkw@:DEIUL1Ʊ Lg fabM:*:qtX #%w,qс.I mD`l.FN=M&cvSƁ Vpv;!6?K }OW  N[lB%*Ԁ+[ OOeU,qͺ8d7"ű=krQ5ґZTl[LZ,9{S O֊﷠'@} }gfTy HitV>}4}g{/n'W.}`ρ6{]^zJ( N\0x[ \+]-Cv*a0,Ӱp2Cxf7yhs~s=luU!7N0㑴0pT4 l$6)Άe._,RKeAuz7W@߁8zϯ߯_/o~~?^ 8&崹: }7[ߚ/]X֛- Ҳ]O`]kֽ>TȅJ^2܂(}v>r1{A^hE trg?ͯ31T^K|ȨȻH}P今&bE?juApտx?kMlVxw((qhQTNˌ`yEA.( &3Ät)I@´b[ƣ%E&SanC;L/LJib(JhEme)q!b! >Za.GGdTaQj~CL)!(1rSҊկK!imtn0Y5b:sr9T ͇r-sewr/'#J6 Lxɀ8Shp n#mYo`{?_;}Km?]:[3Yfr쌗ƋBa76s]Yo#9+B=Ov^ 4am4x ]dT-oPJْ6eK6a2)I1_0" Њ/4T4> Y',E&s'smM.uȊu>ɣ|Or˲{ɺ('9NNAѮǛmBW7B}R4fa",@9 T Xbn1QAoNPχ@=/bs0g*-[iC<[!s*Ul*u\ h%W^R_VIs4mMxW0RV viABy)Ӝ CSfoeɲ z'%|Or71IEpEcPZ\AA1f}Z.]"noM_v2ui4\8u4G!) d nn%Ȑ^zGNfnte"'Iߎݶ7kLJ7:߬^U~C7Kj=AM'VӚv@i]Gi_^ʗ;pH"& GK1dE2:0Ygc5>\'k5xsR\H0H)/.PA=wڐ~цKÅ֠aon&tm7Zfn Ҋ1Ï(}3|hn3S W7CyOSq,7:`:Vjܫۑ؉8S#pPt5#F[twB"[!;ϣTQG+ E! "zKnR”2-w)EO&w qYd.s=:NymЕ @"Յn>Čl}tty2Axֶ?;7Y`,ft‡w=+jw wmZ3欲5Nhd.~ӟhr`4Knzw4NFK{6RˎZvj=];y0weYdƃ|x랎#!Vz7\>[}}״%\_7װ5v(FLͦvwߎVί7B]Q*Ϫd3&;k$v&ȣBxkL \+Kg?sX 4Ms&D%f`q8G@l@CՃ˨5H@PR0h3N md:0|pPm8L.r.ॳ;`ʞS9BO6ON,Xe83ܢfBZ))GQ F{I}8dIBVeE UU G q~@mo`|~V)!倃EnK9;#*N *,fn .4 ) &'IFH D(42MQ8{hxJδ5d Gt0pA&NDR)' ͛W,<}2B?uhEf5CC$PT۲9|R %^o чw5n\sp2c1^` Y@1 ƣbު>ҧFYSp>lqظEl Ue1@FWA\o]9@O\ד^6ᑇԇ.NLAuFma}htL`eO^;u )|.\"YG޽GPC81 g&OJ,#^h,lNdt9Z&L` CÉUp5+UpP|s}JI-KQ~ЩXWH @3J;!Rdxvћ(CivsB2dO˫52 fo(9z絕q3Nz`(D=-jsFk[ǀG3@*Uk]qY!(}S:El4dF0tY$[ qR<,ur)XϚcqȓ2hP1nB -G.G&Qzp#&2 VPKa[1ND>=-G`}:dW^1rE)J^I|L"mA[r0AdQ+",m/:_7ףw <~h%mF[iٴtLsVݴ|I,;5g' serRc&.{Y.Y10wTnIrY։{cړըu0tE',3< :I൧1TŝGceǝ5 }JvgMR_>{Ҳ^y >܈s#Ӡc\c]6@ց &DeGy섲 綁Ǝ>rz=zxǕ<ԦCQzt} s-FB`L@<ͳ E2J$#e횬h͙%` 的"T^+^^ń"!lG7P`6".&Je˝FQrE/HmB9܇h%K2Rf:@eߑe o ÑWr d2$Ld$lдג҅B " ɓA!aĚ5{G2,gL H6\F"A I'͍L6EFV2刹IZ&iǔ#EŲgq@}fv[,X!kx׃X~Q˻edGLFӅn>`o,2Tn&8W7%p@m]ӀVfr7d#|V9tk^BWrcz Cd^Z5rRj1Z'N(p_D>E-PDX*/,;G=7ARy0wз_kwdvV͏ҽԅb\>ه7P\.9ʼn1N6ΠNMKvpW}V,v\@0Y#3!E*U3dH7O>EI1+|Ġמ BId&9d@R3YqθU#`)JFwz>Šlk_ 0a*{SN̓d<eg2$W^!S6\ X#SH*WՑ?)̍a%&!D$$_ Ɉq(Ff):g<}MlU=M73GAFzBocVi@Au,%/ oEղ0,bF?TK!(8 א0HIҹ8mf"Ε> J~'oR_Mw=B_hq0 õR %^o чw5n\sp2c1^` Y@1 ƣb*Q٥ʟYUSp>lqظEl r6}ڠ3C7@= =.ͨox!!˸-tPwQxxp7],><!~?0xœN]0G Nŗb+Ĕ3( YR`ZFcofs Qd ޠdh8 N_S?fý xN8[HI-KQ~ЩT82&/zJRȂJ!!Goҫ eW[$^ A /(YU:G{Zt+eT3ze.m.|Ik7LBҠ @7\sB(HY.X+P%'[֦4 oN~+ HRevPvNs& JNņt^tV󃟕9f#ggm>GaPjɡxh!ylC!OdIOyiʉ?N9? x|굏~)-L9XQp0." 
Ik[Ex{pdw>O3OrIZ}b cٻ6$Ugo/!d& bB?%mgH8M5ez kz~*f~'ǃY~⤆H0:5Ӛ`u& :eL@OZ( RmV.rܒYkd 0.dmlMd.e 빱1$SIIψO~3YҸ9R+0?\L=FY(W=2YL`$am1s Ƈa |8up|<+(D0RemhA6Tڎڟ;mxQacyy0}6 F~zù[Y95.< DQ"Qn{v;n-?v}Vz0wHקKɗm^uVDy%'o>@B(\$ " YK ivU‚ϖQjp/[%Q#(-Gp7qF{щ"6Rs?IUvSCRlXeէS&΁lrl\^/JhmN59EO[NSS/fX!JظT?l,T9]PJ{/MrI3oDp cKA (!.+V)¢c(z׹ bHƱ`c)U1H*s76*-sd܌R,,2v,,{eev2vd'EN~6vO~{Am`: $vX lYc,K4.d`fBʉ6`d1t 얳I`0Q Kw:#ra4< )r,21dB<{8@Zc3Ψ>쌳kRZY]Y}OY[Ov>@03[YURsjx!#'.)F\ii\FLDeVt (NQ#eœ0YV;E!"Qh; M!=(b.FΖqu~@Cu!+ S25f@4\J"$I1sr%*-&K9Ҩ'iIrM*@ .%5Չ(#JAsIp𕵎[o E)q^U"k{o~F |y<;8^=TSt&;[]C.+%,mD=qsn&'0*?L⿍̛7׫.1 Kb( grs=+Nj4x9k:t巜aBFbs$G:u#s6ѸO8yW^986=~8F6xȮQU1_4+Go}98UYl#ZۀG}S妛J\lo6+?8'G'߽woNoN e__} j? ̢ύHP'ᗇ@֬8bhԥ J>|·}qZ%6/Un@~(|;xG;rZkE<Ȁ+Q#H6yLFݦJ(ryhDfr f#w%᡹O= Ȍ2V)U4 EMt!JbgmEx@\xe؟;;=a<+R?)=oykIB mFh!jV$\ʓpQ\A螑|V#0d fKjI Zyc "*"XhKhqvyq(w7&ݸk!a.6~wpzʹkS O*ԵfQ?x(LQl!ZYGYgYBQ%GT5vO7;,|!3PNYnΛL 'Լh;*q*Q"9}hQLI2%(roEcC•q`DP X@1Y1r;1~.Á:Д̃6B'(;"^ @(NI*oī!H\>`]spzahH{$Ds>Tlq͜ŐTZPo{ Gbҹk]FӚܕF#fZ%ka1$UQTWyHM*gUX_dJ[TNmH0b̓yxlpZlxYW7m-tAnN&x֟z[ [ǃQ->11+Lj)Qۨ u_ZKB oз{Fh(6e{LIS& ˌ&vW++UF2e}ᯫpn6|1|q%EGxf}쬽fH#㕍>vipa˝Gsi1u(*kd4Jcl6Fhښ"uI{ST)Ƅ"S%Oli})r8k5<RA%6qD($Rss^aBѬ26T\\6z;tg؍.b hu๞oosKNZ}r=ץ_ahjX}8P,^G;BlvUOg ~vշ柣QǶL6Cw߇ɾ<-J}oE{U=ZTm!$u:w·mj-@ L}y)6Oh/IORJ\{Œў\Ԏ"Y'i9"$`TѠI@hiG#u o7?h0 8n="C/l&U[. +ӹν4RU;\ YE!(dUUS YEI_* YE!(dU\V&jm n]sR?M Gǻ8|.iҩ-;{lOEEtw.BϷ†Q0 Fa(l a6†Q0 Fz(lGdDQ6&Y6 F(tnmBQ6*[SavJadB{rw<Оzra?(ĜL,l+NhF!*FU:Y)'́$IcV_]uNANv>ERgaRR4 .#i$-˱"1]|[_:cY$OH9Ҩ'iI2& Di hً=}I~OK̬TP3ڙ~ю,O{ds8l$ ٵӈrfxzwg 3$ZUqzBd%JytlnyYʥbO_6б8CjxZ%_˼*j2do3S)lr^á3%%,698Z¨<6SgmLҝilp?kS?O~||p vFfEgnn?ןsrnWD巘^4tcK o馩܌fVGܴ!|̳OAw7Mwipyplnu>ȦVƪTo|. F>W};AdqNhg0w_+\nT紲nϸ˿sտ_>2}u˫|O83mJS˻u@o7Y[Mc{,ݥigC!7`pGbvȭ7:~펧!aԜfbNfϪ{,O>j٧|3^ȑf|P_G&ᡮ&sm`z5f:x}}Z5HCItJ0AρOP_=&_ C쬭:#њ s~dǟ _J[c3@-@. I8(.нed.z8F+xڠ9|nH!SXG%/Uc$%fBL|E(UNE2$ЖQ&\@r5 oTHkkb(Bv6:%rg+'* x k718j>޳ ٲǶOeLC7jٴBN4%}N~F#XfYPi1ds{:̎<0=g؞T՗k*dғ^d'B1!jAH։@s^`XTOX0U,3jQAԎFx:E DK VH ]ZYrr|\fɨnxC< S8> Wj^;?̧md>]̮WWPJ4`͸B%6.X(eAQq }IrYpt礗m!4gFbRV # `(j k* d \B*T 쁻{BE#,OۈnFY}_9Aj[ ,ڇ=VZcAGAG tp!Aj2I&(RUEV'2J@f )QIoB.㸭=xR.h_,|띘\QZ Ձ UJӽAWTL8J+W3[IJߢ@7׶7W7%)ESKˣQ P ,DÂe4;NnD$.}b"zL(IaMb%}e{SknoB8W"|q< 8d,ba ߼h̾EGXp4~i-J*9Lee.n: ʀ#y[ټ3ğFSa_E\aZ65%,x#+5W9XE~wh5E9VRII %! xz9Bɬ52mBąTQKsa=76p$2A3⩣Fkb58SlF@HC8j+s+]8ߟO4s$w}}Wkk0wXU'5{k^*JJwJƴM qϧ?*?P-heTDM IիjNĢ)By94CenA<@ 㖠ڪ`Pr4p$W\8H<4**Fi&ruPHY"!(O'ce옜zYn&W!z[nߎ)}x];x5XמPHo$OM;5|]gV`g얪gUNΞɍ]:+q5Yo\ua7OY%m(` Kϥit[*]BugKVXݫ4j^^LY١敒h0P0wYwcmX"[T,ws2 L9讌 Pe״yQ'L6'ϤC6*jnŔx SZ%:_ 324%j VV@`^s鰣˥C^3?KF+)A fO#64h^LԉK: J R y2`)kLHڅhp`2Qf$/t3<^9_bwS07TBhg#-{y0uζ`Vp][&h'O<a_o<]&3 znGoyO^;8;*r5E|t.FiREI:'O<(u"Sɚ G%wJF4@[#o^%^ @?N.9ʶY#2劂GR!ۨPy6ZkEu5 _cǒXc luE}3rjm:{RzS%/f^+kfg=ۏR?C/%sJ}C8QXDx'KGW6~XUi>ZMʈ-w`rQG(h K2]"pLYӂF|]Ḷ2 zn_# XB`C٤8`7ߑխ3l=uRt鮜p<6L&۫yT|VE Q-P M ;Z҇nKuܭ,EAdA{kFNGxg[(OLqTV2S6H\sr8*eh}^Ԇ Q<.v#%ǒ&A*E `̵kW%T0Ĩ`58 ѷ{y( tTHG4Ϳh>}\?{ȍ6/E8@l@sžl0֎,998}KKq1nlVWW"pU0 mjlftrўRTVHOv $kw V\S|>(dU+Lݖ|4C6{Nz<"X ʘEZ1pJ$źQ1f*GJ#'HsS֋RĜ5ץY$jKjlQʳj;cWY*BG/vY)\/>=Wp9v/ƍF+UyD(,#> )D ^ KdhRf` 얳 ^|MeRw.x,="L !f%U$8 &JIY *jpec=}EaRHN#uk!>z,bDb$ Piˏ?}jpY-mU`EWdΜiDk }eO%xH119NPϝIcYm3,U,7Wdr\ X82c_m淙 T? %A+K>^"(g33| rK?Y`:pg^n1a=;Vm5iaVztڏ2_[N -YURV9Qg#ҠdI2N8CQbP(cK $a;,G ItLBYC0s,]F*[\ٲ^hܚם̭edR-~R!9j=E]9mOgk1{Ow9V-sdDV:"TyOgFM}R" KI1MW@oM;]*{* ^q_ N8қ|shk "~ $>eH?#g48:gG'M̏$ώ6kߨYr@* %R)iV&0\vInum̢fRuj 6.H?X~㛷\}ߓe H0'#@zS55ͷt6SO=_-Uc;#VXR $_~NC)Z7Q֚d *zq"l6Zv*eص g!(~/Rt6 v.%%5ڂkh**n㵂՚Ii-p^?@oܟ3xAJ@F2U!Jv:/7JO/۠Yne֐~ۨ3 -$dZˬ<\FTo=gd!?wdϻaY! 
d%h4՜[ɬ&1:nm">U%WG(U;v{YM*e CP{7~wfz蒸w%oHaz\kТq絲 ֘xv BqxIԉLWvϳe_TH=;"BBIoKU0Fl Zo2)d0%W}epFxAΔl(Y)!2-gDHW}٢ܼ:jOsGy]\[BIK':{("Y `[1j R>Cb]spqey\IfMqRΘBZ9W2Fg ۮ*TLĜ2XN$Ox)L!O@V貊8oU?SR3_kK1 0[Lޙh҃+*h I[RN.fB$@xEnm-;zw3P|/'L.sv-ˍ)f޲|&R<Ʌ`2kL&j-2P)/eQ+#lmJ0L SKBr 37ɓLv kw|=kdM'|y27,lv벛GpuUfOs- ;Ex:[v3pz(gA>bϊ79rp|Bk^Uet685lYz6UgX}fRA4 }[kCe`HTȍu SOAd$EsEK<|'dkP+*w*MSOԖSV38s'pin"O@Ҷ7pikt`~ÜH+b+}F4i.龢1?Fr46-r+黷SQ^sn<8hRh58me.;cž_+Dm(뮲 -dA="NFiZ+dmN;}R$U8W6Y(oB]w7:tX% )52Mәb*Й+2bi/̱ZunWN4duY77 /*BPPͬIqnkX̱̜p*ql@0543 # =- bw~Udrb`ʈrh *;"/Q sfhMոNy>rl >}`8m`)j+]Eu&(:ߟ_'%=29gJ_ 癠 gP0`W AhQiZ<^ӎ{zwۓzfG^xXr!-#K_b&s#w;+U^/teX-Ke{H_\vi?/fz&[ԛ׷pٳÑx5LgM!vcv̂jvs-ГvLk[iqʹsW˨pzDdп twCw_nfws<0}4Wlf[v̓ϻ1m1ʝCi=z73o']Bnx^|ݽ38VzMK\yCqL2nEk|bjJ4qE{\yx 噁S[Up@ IDq_ш16[xεSx񯡅"KC~e`V51D VJΓh% IJBD{QE"ȓn˴Am@;^|p[:ë= ,R0_A>lH3{>6h똁R: ܛW#s ^(hZFTsj2'BNy\ie4\> l6%؟5CWl?|a X` od0ť~`kjuݳX)@hktS] H.+D1bM)OZ0;-8`[ؾ 7֝a"DC@F ! [!ZCP!05+kIq8$LiykN`3~4$8hx$hJGlTk\y RhlH)Ի] {|Υ/֮M-z 5>Zv Y= vRd3S2 UR$UJNrGu:=ZbjYL%IFTJf^YDȨY48> N:\2"tDiwϔ둈Q+F<7p į$8'$GK1D1=[azlJTUm`iƋ c+H0em unW}yQto$+ki\uf>K㉞[e }%^'[Y L]Im]p8-TtvI%PHDxMsW%),8K'!!!2DHG4ςO!Z橏Pok)E/lg\?r}cg/6O g$*ξQsʻס^qhvtEB%Op JiI:}!(BXRB*"eQjBH-,.:p 'zGd 6X2'R`Tid,&zd| ͌mPBbSnΒ%o ڥaqkGG~#6)Q"9a9?WE7M*CKA :ĢdZ h!;8˰ɥQqMb`"|L+]8 aθ<.6;Em٢v`Ka JHN,*@6 hz@yXdicx!+xwսޒjuձE3"!pjT,eEry QFIhS@q;衄8i5մVi'a{P55&ujZkQg IJrV;|ͻܙ&cI1OÐG ~6/t/2#觛]q? wX 5fwW3WکnR;)GqzQK& Gt\Ƶի83?jĽMjWECm x! m]{YdE;Y|3u{wY"q-Avw~\}Q/ϛ;?@;@Mv&8 t}gw3EKgv.Wl={<DIzwϥW=bZ$!p4q)=,\=MZ0I)qz\za  U-9Bi5c,%^!\1%IUXUWS+~pWF}mGq!YܾmrUo/;o}I F^pE$\sԈo ɣ{|GTUTw>'pNnPTl[-;ż_ꆒzwFlSϸ7'dŁ qUtwu*X~ݩJۗ ڥ]gjjhЋ65}R^ia^^>akVr0]i)'[]m?y!g_]_5!Sh6zF]橐*0*HWVWEGX"ty6ϡ_p0)w+]ڦ3vk; `BF; $Rq]ؐЦ [ZFǔڰ0gQΜ+EW ip88~$k"3u`DY@Th;GMT;K-ln! UBS@!nSQ07hy7Crq,{q>ઝǚQCtwoj0.ԉL1;w/ohw&#'lZP-51Q'Kl1,ٜi_D^~E&wǝ{ uFQ˫ 5]D猠~"o̦S '7TϢyTϖBS%hMZOma2 "+Q֛sj{bZ;/ P3))WAO|~nXsz8z3lj4Qf;9=OK mg0f0 Tfpȱog)l7_f0h3u  \eqd*K;_yp4WW0PtԎ Op2YZ)PJy WCp+z:* ';\e)[v*Jl{zNݤT\(OKY i~7sf8|J.cJSmEx #њ ;[N}37wE){[4uA `pǿׯ\p|Qڳ ]yDIEPof-0fEBg(DŽ^8uZ΅N ăPF˜` ` w^ԱؚѨd&nљ;zn+2Zִcl=$!?T#ҊJhV*kdIrYc/#΢+_ura6qK|tI Λ2 D,sqy N9 ;#4H& ¨1M KrzX {΀[Or8Y6ԨU%dhyR՗HD,CfCu"K vxIu$TGy zl˶}E`QL"HORJ\{ŒўrCnu" ,kJ lbk9ՎFxF&--X y*84|D5`˚e9sY3ڻh\ׅ*ON~r-V"}<"6%1mb '&z_堊j#[K4M1Dq16(iF6%$IeĈfjd.bjO7omԁ,t-W %mO-ó |.H+!.ǩDe=eEH-" B&e,{SZDXDʴٸp)ka3G+[H 09OK֡nXC$Uo囨kM'6DB-MXoK>s&&v~XuS"9͊a+9iWuc˴Ĵm-X0ÖZOqI/=pimcYt:b{#hҪ:= %t(Wљ/Ͽƹ]C kw5 +-!ut:NQj'cK}p_Q'#|N1G$-)C*{2zrE;! 
U!>`(SBH՝{t57Y4$&ƜxQ69m6>,Y1kk:a ;:q⯅pT~CB@y6L17D2* i}dBE2bp BV2ۄy[Y$^CBj" L.iHɄ\$PP YoIHaK~S^}N/;X>zrT-gT͕^c s$ X+ MDIfyP)qCHe* xٗ4^3HoM@!W5@uNxA2F5A%:/ը3B yH )+Va0',90}ކ=<+h*6G֚(lgI6K-xee]ٷ.ƟY};ǥڳ,v>bw ܴn.';IbҖ#ۭ{y+8dQឧ^9lN;x_wm~K߿LjGoU`p I_|*\9>|ܸhFyJAP!dDf#2*Y} W|AQj%#C,8&8qz`ۻldkpA#.FS>bVE4r-`d@ &u-+Ϟ^W)04.ϟ)z38?5EhVjnPbOٷ{e76ƅ/pEFrݏ^' [*kJa|3@[)ij5Rz!%8Rz$H 6xNj.~izzK @bdIqϕ3(-PыiOSGݡCO"R'^BR܀:B&qtY?l#z~tP|ԑw $7ZmE_{O-tU 2zlF<:ʾvY^sݣu{X<RO(*&jE@$bvm10D^ۺOgKMLQ|̺8kߥL2BoTZ9 h)X+\l6v͟/e{q!R7'RZR?yvz_M]t7/-搬u1_9cq2z1Y}mm D6;»8[R*!xrEO.N('W ɀI4Ra!Z26vɸ,lFO,ԃ,{@eZ4,Xjz5 :~o\b+U q!R_ZI6,L,j(4q oC A*YŦ26K#ҧEMh]V.$)"QS`[08!f1hmF-TcfeaN,>Z(C'Y͖ꞱMx6yBe |1T(KeBtM6`Uf&V"F=b 빾8mഁio +˟7V3xJJVf,/SݢMPKKY RҋGnopxmuvuyݾI7/hq/JG~0{8n[]Ꞿ9 g)OJޅؒ:$/Qx WNyM TTBNER :~Xqr>p ڿ.AjgMfTWRaq #[$T{ST,v0{۹LM"h߯ރ; 'osרK_g0 3L!GO O&q:vF']BP.˂rԁ;Xz%1Id#c'ŽiP: KDBYyHG2e+f2/E@װzgaj VF9_gGY|]ۛg6 _'&vj:̇ǣ/4I&棥=NO?#\qV6[kZ>Z4&Fqn]ξ,?OƠp=S}4)/Ufo׌DG?.gsG&ɟQ # #;:Ll.x.Sl2ZmfhiqqRluȾQj7/oZ/Ggv}9Gg娊8&۩RfSi&GqtO,>}/?|_O}??w|w`# }Sϟ?7 oN[ g.Cۖ^nK(coĹ2;G7IMcZo~ͩ*'(~l~(S2rg׻Vܜ|4E?Ջ < _o%a]بF*M)h߻ɃW`Ik'GIp-bvxPER JK[V2hq8;^y蜖_ل=| DV*f$ :3B:M.uVi3xbxE}^L=j.o{=,z2=_,~n,J.'=hrJKP0ΒA0%^ɣG94'ؓ̓.$v3=[M>7 ? OnO] ٻBj#~ړ3'g d#c7Hl42κgq_ L֟_a6uaXoMGVe+~8UU׏~Q1r̄fڔ (C!RYJ{1rlf; nC(IJJ-\D#!ՠGbutFx-)>nD[kQJ6$ %ɱ@CF3.1nzĄ>n1qWtٚxrI'Қ´ҚW"m3^j42|ڄ,xx`X2qKg$mNk$ZkZFe4ޓ\CQ8'gѥh)*T hJ[b< V"¶PV[W[H<1ʳ"u7mLLC]~Ngbs@sXtT/):DTԣ-gT9_xp8 sh,]&W:-"F \_$mJה~ǣ\.6;vڪZ`"N3RBW>:,,:Lʌh(E#AR" Z&@!E\}XL$(A,Yż \&ss's=l|x2)ñǡXDU-b%5I N-(MyԂA6>`DRIQz2T֧=5jaCUӪA)g52D=PWDJ$ttcmTD`TңL9o|iR<ܿZl F- nK ۜg,͸w?#9.#{3cu^ș27t|CS~=h$Ӭ-+q#H9ϙV!F0b"' -DZjb(kyཎaKeobWPוWY! ޸*_4YAOh0vy+BLm`8ȭNEoߣцR.YqxᏋɾk)nvp]l+z!iNiZ/35Fz! iQ8Z`$*Ko`2s2UWW:zIҚ I! qU[3qo)*hrn%FskZwg+Y^פ'Fc#|Ӽ=A|Y(ehBz^>ɻ5MW{65O#@s(QR1HP=f~NZr`sNx'Y2p4kdpJFRT ]>~~>-؇BZz[unoaq>&NOzVYE=wt`vT2xQ",W7f)UYđDу$B#5:HtPYA7 ZVh\ ;ӇG5A:sډ@ iϒFA&\=)EY0j1}iWq1ln;p8l-{mz,g6?H>_q ϭa:T)$H:%d@wJWMq aa/!0jDEMsŴOLFXmSoՠ)`j$(tUάYj֜ITNΔt S) +lxZs|#қ)"~Up݆l;no6fn\\|2i OVH56!44 & G"6$t2g<}l4=6C! {礌KT)9nc h)T$1h)by2\w"[)q50rp8zgN$()U:iDMʼna!P"3A߶$s(}b> 'ԣ@QinR #ot Ls쵛qmA0Mc[Ȍ {LD`F%M0*x+fo⺣2CVϏ;=޼LOߏc_EA~Ve)\kedl60b~.ӶjlSڥԥU\?)Evn7tnxPwQ= K[MWWmW!^ߑ$.:$HqS ~/d/هkƗ_VRxIή>I^iYa:(BEh<_RiA t`SϏ9 t>߾ ZU[t.c ]z4{y"gzkynoǚd{f=}ߛ'V¯,Pp"ѰќDvދtmÔ1/&niE//ܗ-.=w,D /v _#7`x~}yQ^CB'3pA`@b^uQ^; jHbz`i3;* w=V96,L.yj"0o)%" 2w)mz]^g`ffWO~墮Jԕ<8|!˫0ϣ||{肸mrIGq:.GDSb(ˎ(K(K)I% Z(q?7 R %M.J!n4xK"((L;OYFM WN 'ݣk {SEޕ6r$Bev[RF^!޶g ,=1<[rS,R7HJDJZjV22+"8X'^Ӭ}jf'v rUX}^z{K'/`**G)vТ Dxj! 5tNw޾ٝlB:+Noy7 6EbX:޾y%{Ɵ_N+L?uWYVMBu$HTMA6F4VMoռUx'-&TqXo E{}ȊޓMQ[(;FN$16]&J!9ba- UFB_BZ9p}3J{JЇ^KD/Ur)sbԉP5Okp5nMh5s7h@ <ԣ}_H޸ˏ]|{s%O^s=?\k9}[xT63?|Ã}'հXuIpk.u?#m~Fz'ӏڡz'-':>  PQ%x'UrIJ‘b*KI\}I8RR(!^[s9ܠj\?:p6!DpI4:TPbA6J)Yq6Fw-VwZr6p6Z$MPiuT#4HalI(_KٓH'Nmp`p`pj8xw:fӏz8} bZ %%oNHRX|2y2%G #$5OEW^Oj} v*oڑfu|-u\mZ!*NKP\mb]mJezWtQ*N:uj*`{uՕ(brb%WҡJ*GWoQ]Ք*\%PU]U*%M+qjRjSOx|9ỿdΜn4 ?lg 3iC+iBFW/"58A U4c墇u ʷ~0fŬ8%A" JT LaMD@ :q+K5"ۗU .9x3aAӢS(VT-j*"$Hacq=EgyQ;㬈Թ=~f 12HV٘ڂ ˪0d4DQƘ Q1V' U5xN(;1Oӫ|i|ӵV%n-p2y;er,=s>ȫmHyէyn*|>>+՘? 
z:YԤϷ| >Vƃ}:=(YB֟3ť^eg˓-cu^k]kV~7?ծGCJzJ8 E$ZcȐY$1I̻X:T L`c=HR `Y( lS[^FY-`IQqXgl-S_Oއ~pɎâsj~Av7,F]Jc,NF2Y'd7) krjF밤TBx\b$]tS ɕ=d*`/֌Y3G) ;]uX^>.Sdz]CqWI.nYyaişp|25RJH{谺AFXuS}|#",u*Fgͮ⨐)(Jl0 6\qqo/.NQ_l+9d.Yjf"X #'?Yf'oXtV|/{iu5tsGCS#{M 45[ᥱ/C]ι_:gUbtvC{y@jl+>ofO)#6 bРA~>'8opvNpN^6Lv‹1;Hk:W]u(hOC `1XM 2Ǣ;mSEFQ$S&Qm9a r4a<'c9[͋^^Ӓ#nwܸY)9<"뫸'vubM'zۜ6B2) ZrMH$ r4Ȩ۔%L궵!*A3H).䷦krc;;٦, !7TPw,+eCKH ⣡,bEIT(o 8^9` XSS`_Pn*茼XgYx``0L;ƱcR9 ' ɜ<:>(@Er2F34퓌)v|ƔY$_Cdm˛v> A:1l2G2vc}WG_i.Uh8zw+1}qxw&Ѡ>,5H3huvv65Nc6N%-it_?hO$VaÏPI 2:\$ țN=&}/?Vuyuۜn's?7CVn_==ݳU-pQblTXRMpC8Җ1Uld$ ΥnI")GI$>_&uUx-wyP J*򐝋gP0E/tN"P,NNDP[z?)[Ѩ5::bAHHkwYUJj+;@>Wg9WCþ<\Xa12kSa* 5)86f蛋V{CUR荧4I ::BGeY4V?Y&{t@$%1`zAVvjeJFF)Kj,kbF4C aW˖l]0[YuQ?VFɥ'߳pNꘗ78\*mw~'s23?J&?ܟU-;/;pZ9(F盵aSN+{;? .}D;ZxzqyQup᣸]<X7Sʿ^>qV_¨YkypgN۫ۺ1}{w!1Vb.2evKlxσf=r]3ړztkE|^WUף4H?aY/fmwKSeP|"*ɟAFTa WO>eb}tM|Q|L|FRԵ,buI d-O1\ #k+%PRFK{:i}8ym(e/0i" Z 0koSQfRPCI^_K_xbǾ\]`)G;_FuI?{Fr rr!R}3fd`A^0J"$eYOpHDJ6Y}ꯪJJZ_RK˗?%xpEy-ք3wI\$FƆT0bM)T]O 8~V:?gK1tz Q:k%P 1PFsfcс>i J u=l` 6rr>?RƻwQ":sl<B,H6.r)Wx7]7Dqv]TUV\m9h7|-z!e$dH 7lZ8A.eo)\)<1\N[PCz)e#jI %&qUiܙ!ʼn H<|s-7F+BZ9 | oPgwNƚo|F׷n΁Q3x~.Zuhw BzfqW bq=]Tot`$bWDsRB]"^:|-;Q&_v|E<b';j,'++|BLzR+D(v(k3g[ڝnoQ4\5gkHpl #Ko\  t2l$JQW$Y(ך!|2 u}ZmH_u/!hX8X2T/ѯNM.NM-^&x[Scke7ΐO4bH]@pBR-ZUsųǓړSKˣQ P ,D}Y-4;InQZQ8U ů?T0P8T*6x\JK-9Q9=rvD5}5[8P[ ~q4p$W\8viE(T1HL451sD Dy &g F8 #Ɔs`ߪ[0mϧRMѠpfv?+ n՘6@pu[[Hx;ǫ"vofF1tu Nz<4Z^KZ=;x;6ˡebb3M:SԳ.)cz l]?.w]SpvfI[زpW~(P^kz˧h|ϏG鰤w!\g>]YHԿbM-Ps-PܵyNeÕrm|rdw|r% FR|hm7.wdM{za5`RB+n cAXAh`agKLJCz Mc8┮Z=Z[?dN:un95k74bIl0}=dz`ӫ8ⷅP[*R3%?%;\X}p.S6٠?fghKI(# s޹g@kf;pD9(dzWwƹPi}$!C(onyEjŢ4S>h9|N5ie 9oq:EG~  hp/%oc:JZ=8L 0V@nY5<eDr@ӊ Tl&UxO/`%GN5m7 5_kXOi>x";iREI:'<(NdMֲ G%wJ]fF3somsh0plX(g?kDd p+ 9CFKċmTFcuź>Y_O(6VWru=AդE(޴A\xSr6\pas`J9.\ЎbeD~k X 霧(% py_J N#$"oCOB,y !>jN싟E/ia"DE๦VSA2"dtp*D ZF5ÖLt;_xU/a8Fff$8yP=I T^Hs ,HM!MCYΙ[7 {[Nd%zn]wt+ M];0V7Y,jd>9QP(DѨB #=E)O;joRwR-'$A]YD]z V1{ƒs\=/qpeL%Ũ+-td# c$lD:υFЫ2FGaNpgu""0 >yc9P^AJ_Y|HW~QHqnۻؑ6?Ozyu^Z6 |^"y K֞$E&b;M*R\S"12M=;_u`˪*K7JOf JJSV*FY d(d~E= 7!OA葵MBJWRAj IK ɼ&EZݎ|u4ixʹX,Ց%P p-.N tqn61O? ̣C[7!' A]/S45CVMκZ5NۇKuw$rq[ۥoyS&X2z:a"ԔmH"w' sH=>(/Po,ޘub~&J]J$ ߹~$c<*.cOJ/i_Ԓ%($SVY}t^jG?4HĤ_N:sHZ5aziMp6Qz$/O xz<[ 9ֺvaf0?c}tX6_Nj/X]8}f~?COe8-7Ѓ^ك4AZ-EN>!=`Qq|4 /ԯ:~_Ss$*+ V 䌖D09`rs>at.]\*iO7(BAC u mԂAeò#PYCC$&4$:haZLua ji[~6L 1s"tl` 5)aJT9cw$4t :> N90HIQ( i:>T 0YCR1|~kԲXt`rT' =iTITGlD3"b4,xSֈfה}un}q `P[TޥŒ r ִRIۦ 4h # 9;=6 ͕}~؀Aʼnn/G0ޢIPaAc+ys`RKl3Ui*iAh2yÎcl a7,'2SXm"?'s %X"O"Ci$, {[yrΰe6V~r w|^ًgFK8.艕L@0 du:/MHj}CELz Riru0¡Fm6jt HbL81m(%gat|*Ob`f0mOaNDa|YKYފ=E%Cv+ VlXY S3 Q+l؇|/2ALB:!ڟPIRbNE"4P5!{EǁZD5Ŝ3(`4%ў{.j鍰Ij[|\LvFcZ·hK\u07|ek;=yAUMm,:гe :#Б8MqZKdIJ:̫/ȕ" )I9ϮCF&cK`}2M݃mzk iClyP,.Fy4TkU(Wҫ, O_݄cz:G6 rrE0S|b>:`ik->n7ǟƓqG*,߷e. \hZ~Vmka# -J%'hH+md(f!ͯ&E'y]W;:U( 1T f0QכB]2C!$atp\8BIL5| 뜲[ nV<kZWYF3J8g8}ח]!f~^=?xN*މӠIqW+ݪI=zUK9 26 椄JH({0w;vwܼ%OAwDȤ#1FbhGb6h}ЂR Ok@*zj"2.DmiSጣ:e\x4-PH޵gt e!V$ OܧG}y}=LfWALWOWs&HrAٯMѤ~R_'i9%:KxfZN)G%B#x+`WZq zPswʱql+*"/j*GseXJVQ9(#t93jd>HSJo fi=> 4A7ƦpYn4c-!Z i(Z#TL(!姁gxދb&D"$ jd&!ep©p_Hr(`*Hy*DU 5^)DڜI,;KwEV zk }jso9`,ϳX:R7!AN%IHƦ<>5y PL7W=D3^YQJ%`l6;0nL?֞K].>Kե{uuv4qpӡQ '9!'#+](8f B-'à,\Ӫ^Qzyx];_gK5s1\,NC";!J9/>9‰#H.9A~B% nTP:uVVJ@f(a}Iڬ0wa:3.7Yk;S>Yg}'mw=:R_{)W[}~R! IgWH]A@^D;+I S_C(^"xQJF)y0hEm%*@6X҄nOto/+b/K:qT&c// ߾j>}{>!y)-pDdTIȔq."9!Q;! 
|o"EooaU;:Ex݇ke]c:moNoG7˒ûy^= Gi^´X`5^ gdŴ E*O+`zVӳӲ#j&'G- R06 $C4m|H YTA A770!2YjzdJ2./+~X]t՗'_;PhI$ O;3gkZ1b J24!t} QI0{6uYr/SǗ_ȗILJn;#~&FL|ز gZ8z\I:}N-UtvZCv^gP~ּ$ъ $Y|:b76 ?fnU]`뛭!|jm,}\ll~3Kޞzl.3S[,Cdh>1Ec!ý&k˘3C-fyN,_󘹈q8i$b=z#tFg1ǐ*fY@ '#FhY7ič1S2>XH:ind"ILZn|^5q|>⺻&K:,j3_vzoFbݧ،ܱ 'eLurϔ*A9&09XT7?> = + xj K4A/Mld&kvm>[cĆ;vUðߴɤ߆ꏵzSM^_s+s^pLnƗ"<3nuFΝ7&)Jqջ~퇂96I蟚ā7V-Cy/xjO%դNr a]R,T›dךe6fyIͬ fVK)} u.q6Ӭ5nfיeVtQhQM` u 3 BdךN"`6{D,Y(©Z݁E*;DI`lg(6(PlVn;&)՜ tPlCCpEs*⪝"2ZEʹJj\*vfwઈkͮIv*Rn%•-U78..2,] Zҗ{lɫ5I?6s .2,c/]NCKꪧ<<,SAY7ҔY(vǯ/ Rl"B$m'EJI$ [9/ĉZ5xݒYr .)Z&ƨkr8w3pqIMv ^$FKKrn9jvS?6vΎ*ւ+U4NRKi҉Xr6kͥWR'b2N8 :%fEeA*+!K $0g;'ntLBBhXllk,Ys[T9EPB/G M^rrgJu\{weΨ'.DǓeoUtgSYzf&B>g)-DXJѲ͑KB:}0C .fGY1C`9@0X21Y YDrdIKU,()nh+>.#c Lx0ZEo("뛲ՒZ &'mt,_b?Ҵ! >1p&xB Ӽ;Z\cQI <ΐ5.+iAX\2BDPeI`SP4'Ͳa&!'Mc%hk" jΈ= ̀q*$p:f#&Lu:l |[JA+ߨoSll\_yFK;\4\f, >o+J%4јxvkKB~|&&mMrPJL$o`Ki0FK;q @ǃʢ7{Pً2[upFxEL0蝆['`_xP%ȅݒM> ٣ F JǨ][Yo]RR.(,W (1kZA9;Nˇ)3,+R̵Ĵ~f*Cb9nI"|jD ̧Ȭ&;Q貊88SJ]mkK16K&FL ,Wzp%ChKXg!CPm]X! fQA]8yX3NhLJ7gzpI_EQ @%*`uQXkʨyq8O4zKg4vKD i-cZ2賱: x$FJnOs%YR<(h7?sD8 ^0,F-tR <i*Cʒ[2 tZw 2q5d\>-GLZgI;r._ng9ӕݦTpqw*Ώj{oF(?n{\K~[wnj~_sy{ۚ[h9\z&OK[]Uޡዸjg*Wd)zWdYȚKD4%Md*('}ђ Vuk kIsW OߛtYWϏmmaGY/b @e*B<1$MY`<:3h]k9CA&3x.TDDj,E-a"#LGQC!AǻAXOka>|9sU݁ fīW\C%%6?P~J^`*NT +&Sr6G: ʢ-I4Ydޢ҉=DdDF(d,rn&@N0%2z .% | )" "G tkN46՞SB` OXƘՐu\v{{؁oW*>ZYpkf!Zg󫚦_;~8LdmYd5v.T?)A_!N+j]ֻtNkeݳYZiOv~E8ZRn_~}d~6j46Ձ_czG];\ մodW m^IIBٷ蛿o.Ç nk[z|>D,ϊŊ+əX"D=?JDs_ M' M'5)tfY0JJȃ h!vQh6Jomjq6X<ίVvI]wJR'ƼomS| z,Yr*gt^* Mщ 9 5@?jh"bb^C[XfF^hn|*j3`Tߐ+^g*0pIQV%I_ZEcW9 9;ƃ)3JV #G?j(,͒|$xv^uMA-?OWc<ӛ\ė?|a 0A!D$񴿤 'aJHJ*4"FWz:UϨyޚ\J`cViZ "7(X6J >z4!:)t#fŜr!I|նpf;5]'yuM4]$ERM''mw-=b;Ei1\u(;l>8vB)#H"ƅX#uhm~pl]O>^0rgBgkl,,YMg7pFZqy62-g&ʌIJǜ $0O^?"W-*85۵!.'gLje6mތT/=D*4ʑHt*ٖφ|: C4c,[S;_gWiC3({큮PJ(fGݬ{, /KT8~.3wSƭhf `M>$yI#zeZb- r!  #smhua;7uTL^tP=a6}Ы;مӅ2O hMP5!k{}*$m{W,b vOl2t+ae#5Ɋ+1V/o\阘%yB{bl=pJGbrӂǤm&M2:&WjeT;b&Ole#&mBj9۞+OaOg CH3lYv:)g|f2ƃXlNdt\a4ԄPr<8= (a@>}2\yD8!He?t*%t PY-# *LcrͨQzEC>zL7I`qzĮgVz9˼?<uꀸu7\⁣TJ[|]`,pJ.…"J)-2j1Vy2 UWRF*XQ%%/ȵ8,i-px;xΣW<:mo:S:BWTY.6l˿P*e%2m+PTJ~KNJ=J|ګ؋WާZs #7(4qk4e1p)-NxeN\7Qz#Yǝr~Pv Ɨ/}x䟾>E)=HK_#r0 q!dԀ>q %9 7 Eooa^v4u-#&/>V=l Og oԝӆXIu]\/ nQQ6[WUPv-p+c]tyV/)Y)VyU"$;.3@ ˕+4IUٛr,;=}츽3wz,ީeuol=ZCēeԁYr^{a|1 Ն>&HlvԎZVMK>tڪNJb} >FJ *lOf"v=d{2BB ۓ3mv d?=Xz<ƪlqZK\iN>OЈIzTR(]~-cMdJNv*k%v-cT#mjXF To-)Rܢ Jmrc([;=:;A_ͬ}oMU7P{?on&η2[c>iFs7$uږߪ-6u"ByDsD(Ү 5rٴ{LK.bϚwzd+v%jcN&!aޡ<+.Vk q&!*gk˵W6zG/&h  #A2}! ^B#R þ5٠A*`'^oV: [֤)rˬRJPt\ iљ "G tk䖦 | eh%K9+'gdgePRHHLehOh w,=BԊ63-c\I3A̬v47s-X.ȴFϞ# $̲\YotR2DC-猁`hԆFRr IkJC~!BӮRcAi4Yh9)񰖁3N"6`YgFcN/q)]/0!3kl}4C$e!sCwl;xalF#)7ܢn}eX=,?8y]=D lcׂ%a1-7W4T dž dUm8cph\U&@SAyG a^X@hz=Z24DaLĴJaeR,$JKR@^b'(! 
_15lm,ˀ8v V"YԏoT?hZjdԶMVf/kLPݩyaM0 vmnmκ|bwq.D9'3S`} \{#AxtC.?b( 7DR Ď azzj'H SAʀvw 0Fr 9vXzT12KҠ'ש8 E /|cme3k9=WZXn--yH,T2 "?|kY ?j~|~7ny\l* > 8}.-H>.*e$J<>>'c/^&];m'-_ɻL}N[=?zL\Ys9Wm߯okO)0r<{J[)CY2$u2tPs$j0ň.sSfP)\Si 5g V|oWm0D N3p\^L-sow%:?-gqmmZU20ǔaYmRT9'U5嫖G,ih$[-N–bi}`9@ /aR tO FwDGm% rT0x^8-,ڵIgpc Ķ$Z"w_p~?@@V4!j1|Mmi&K`r٩[y?&p"W1>8H?G#x#9P|h6:VrJ+k9;mVsV{NZP8@8L=z߾]~/y%Onp_vg0mEۯ<՝k2_f}H0[{Q=#cdJct_Sl_OlkWS2:KfCʠKfJfHϚXc(oh׻?-ڪNlL*jc>3}lk&AR&&:1t ͪ҃kO4"H5ղ  gݭ?&X:>  ,y:C:?6p4|BR&> ̴|p|໽>+M[cS-FVDWs=֥5PMF 5#5jT|}bF;-Iw_{ (P5}icGN NUi)9+( {0@R_D+~_+4cH&hbiJ dRqcW0%XS!$ bLup!3p6TU95tLɘedzwZY1P~&?8wLVJdUc:>siglǥ֑lDp b}˺ ^jѡ?ɟ|juG2/HGGQݬYE>"T/5}<+.a^W/J=mE^q9x><~юѫJTtfZMNTmF@i+n-ʹzF4D|$V2FiSvq>ONd4m&/Û/-t"NH1_^->N#N70JPx}3]~`qonzw/ >SRz+qeG˷6ckX$:X??뽂mHJ;Ns1q4{02OVGWxyW9w׹g Ƈ[ˆh"',>_~ nP'E.?n.\m>wP?՟V7?{ͷ?&y785FPx1'Bx|r뿼~[G3ָխKuԗ}G'c}]6NdfU\$^\iNղ?~57\#&KW 6x/\\UR#ɅDZ6b[?2^ϭ O]RjjM~}&zc>o[$&I'=T5FpdI6Ҝ&IjÕ[LOq|HNG1TJKkh|^5Ϸ_Qku sX o=9 zp~cp)*syz~7$˫~_bsY_OsiּH3}gV)N4K䵔 ˌ>MŁ;xsln>_xh~ywL4״ᦏS>l(Zbolf' ̴/IPHo`5{=g;RPlkB?G-pY)_KTIYSCR"8 6ZB2%Ӱ4PZOZP ,R WkzzPg~h#9=דԭZ)~T&TCx{']TѡhG#" lC8MrDc;=xbhM޿L9=tpjkEZ䚦ܔ)&Zkۧ"T`̘.]4̀ME²-lze*ILZL5xy"ܡLG}uPF1JIϙgpK'+NY2d*Clt)]8`<"cᆉU9k1tN=d +LQuQN!yh!qxw5fjYAtڟ4߮V3k(:Y-/dc)7߲F BnާZuPtfέ4j%ul+?{gG"{69iƶ/60FRlP((X{$zd=~lt)dW! QS践!,;v>F*u8pCۧY A\B}2Q DB8J&:@@=V+dܣb7w$ՠސwVrEθA#MABX pH  . LhP׽E"5g A5 ?2 kG !.)a04KI0ά ::k5ִqlL:v_ dUPb:!A1nqj` ~h5zpoQRcJuڥ9^5 5$DheQ"ZCu/ r s7C$*AVQ=b+ZCk>8a4iY hvkѭG{3v)c֩x|1flv!؜NX)j{ 7ݶ VЙt64IxdF+ ׀̬$޲[6pҫ*^"VD74mU(184/^%E S10A/lmRhàmryyA;o6(3IVS n=CnNN›ih$(ll% fH?(w%[Y}ˡ2#bpaQ"DluHmk"#* F mms$zpwgDr A2#!5xF-?A( =ĖZd}c%Ö+^B|jc+ AISL1. [,BM aLo( Lj6F煎<|:T{_c:5 =&:Ƣ$D$=- , V5v@k] kmxz@mЀ b,+aN3@b,c X 1@b,c X 1@b,c X 1@b,c X 1@b,c X 1@b,c ^@A{%,@no ra@ D6p,ЗT>ao{zkZM@*ٖ'*~2)*['<+@m2 H{v lEȚ7 >z >pGOCdi$*cC)#Raٱ NlRf:_~fk:?¼]p! `qi^xvxU%6kur!"]8񺀏ߠMR)^{H^a"mRjb3H@dW'q 5W:X秧} qٛ]_0Vhb&Ll+0JL`&0 &L0a L`&0 &L0a L`&0 &L0a L`&0 &L0a L`&0 &L0a L`&0 &LW )d'|i⺔ǀmU$SUNMsJX#7H$hD{KS/W*^<|^ͧRS迖Udz*I}72-SU6( DлLgFy q"y26<} „ v/93be;)KT/}b:q6ĠD7uT&# Ncרe c^LjREY=C/lޓiw 淞BL6g[ M-I/ǫtm柵0Ыߥŧ5s 6FT>Ì_>\:*%$^[q_=?VWVCoN|FLX bMGh+)-:k-`/NOڵʚMWuS٘^}lojy&MLq6ǩ+zgT-T}zkWz Z+5dI6V~F2Șd9VStҚ쟐׽[/ 7cҝw}7z=0L|ȻmJ"~};}Π{6{2-)k&'me&Nٗ>!g_\wND'n1E%lz %!24_BG,aRԆ qx~8ēq|0w<լnՉ//-.vlkmY!0[ 8)WEˤ|,DF' ę"rB^CZ kG%d[ ^{o 㳹ŔN0/|2'+i@VSMD7dSNYM&rVYf5D; Ud b.9! ;b=TJ*cm0䐜o^ F#Ѱ<.~BBZJ[t<{D+]8/]`?fGϯTpdJGFo^u>d?#m 3ZB:h̶ q'ġ@&^k,{PtJ'R&BdY55&FSJGt q8ˁu_ak%SVS>^sڛ:q7o^\?;pA5`$Ezw|ϼq)] 킎~\観DBh鮟o)4޲D?]=]]#7ft?9gs!p崶omzoF5p+]wOWq<75/7wB}[1}?̸e^j1_|W#_ށ̍sW͕ԏsc Hk[)h?t"r wRTo%~]msH6JN>%N/jWxc6+mϨi^?[̾47RUTgbORG-t J)dg2J(ԽzF7R gǍ ~qrz~}MGHЎ 8sxҷǾzSYǠdLxa{F2TYQ&!e\L[\{]䐵f{dk3>$Ǡ1?{WƑ@6w_=' 8eѯ#䒔eVϐ)i!5HĖ8~+ !sl ½!j b'6!JLPh]0fx;杇JLw-6Fgpݕy/S`qf#Q5aI Mbm<ȩ3n$ <h2;z'#Ϸ<80Zc8N!FJE43j0fw< Or׍$?CFG୞a$8sXH,8=X:a!7*"Ɖ8fgoֳ7k`m. =%FNQב]>-"#K]x2M 4oƥA:RDH+)*# k0Qa0Dt+udD;j 4EQjU$,S8":\1}?\. w񇑟>HsxZ8&:sBdԧp ̥&ISJmۤljN|RZ':R7Wj҇[dSZF*Oh]d}w8E,oM 4Oa@eOD2z%3v.g-Py6ZiUѠ!F0FWЉڛC=60I-TRby;ra~4xMmGN&~X~nAXB hrtۺPUC خj.i84~M65KXxLsgs%ݪz+k}y5Ϧ;i8+~%ww\A'](x2w^&߸c5ދjU ނY`וi&":07-Ml;ݮ&!X .rl'8tۈSJl҂zmb 2օ"Yix(E;Y'=;v c 3pEJF +9b5\h%Gg&h:.0?:[L7VbQ{et=V0ϝ@)1'<|<#0bdg9t΃( % Ƙcl2/`"Y[FHhN)bxЄL|mFicփZW\m(:;ug7t n› C!fH ^ wE%@Uћ4 l}#ʇ%ݷy(n1 m(>b߇7:.o W8߽;7}x1]$CIwl ޯ6`1a}ü ;7wqt-BbzⰁitGRK|C%8Bx` Zʳ?kZU!,M"0))y 4Stym5b:HM8Ռ 9b .jN><`ENEg΢O6#Yfv,NkQ{JɘgH#x9LN:*xGOWtܫ 0tjo|[rJnҔnײI~ygAyH\ xuhb2շ SY;x1SDEJ)_~9M1ȍI# 2Ҏxwx!7;ٵ~ךbI)^T*Q;]^H^'J3x!%ӆ&"_Zי~*?td 3ʲey6+7;vvvk7fD\"%O&vHG>`EHG^ %'6 S'ݕQp,M'hb[|p&H E.(qCvYroDuv&&ZzPҙlYEJl<{šB_?mNQm{.#Mz)sBGaw.*4K}.$:3\H8 V"P2zE@Op*壶ěUG3FyR^ ;ӌmwI1? 
i5u'3*]txQEd8:(sQ^R}V rju<}\sGgDh郁$.ׇW -޵χ$$ \qWI`y8pP `*II2\E e+bt(p2@J!p7W*-UWHUR WoVALڪz~3^d.VjC#?51}@ȼ<&t,ATR#{pcz.z~@b?(~dY~%PQib%%BT%(DD H`ZDAGJY-b T>1]Ns@{h_sƓ_Frj02¬wG`/ffg3 Gs$xf9|,erY2Yf9iPA)"8,{ߢYΞyg,e3y2X#_u\M\hIlb=$[ ٤LT eEs6i SBA)U!H-RDY6FJVIQR2{%Wt>(jj6*41dEH@)9IE$s2JaQiƝVc  B' L~mcmŃ5a'V13p}CT<ü N0 NòOo.kKMMm h˜"#@"i,8n\4'RYɑs@!ukH[p/"2P4:]@, SOӊ iT5giwV5NƅNìgg493Hks'1ڃ}V),TU ^ŹzxgAHakd {z"HiڇgXA  ct|&a64\txy_[ VE-6`MH`uH 870n}Z}N1r|a`UE)Ʀ굝.~WIov9~Tb@uV7L]EMh6 yߺQ5T==-1¯Ro ?xF0RK?y=Tߎ"Q1 v5V߃-w j{l4wYc7?]Yo#9+~kza4 1 zmbRݿY\=;|~O}&}7tÿGzHs5h&7ǟ>Q?>ۂP Gb6)Ӛ)c@kdLdm2,D:# ^˃չ=m&NF*x\, /w`V -NmH0b̓i\܁'kff'n7ӘQѠgdϡjf%A|oL Z*Tn~hӜy/En_̆5>xT*O!WJY9ipa˝G}iEcTP6y7T$#h(جo5ELhR /M<)]f;O8 x ݊c\ȗ m.2{i/=-cn `yALWOWLǫ˗yA%(++"@Ոb O?PhG[P! ΆR ՈR XLB,Xa$A@ýa T,kBBB5Br)\ІH͹ҢFS1AxKg0P+hyA72iPX]<| *P).^Lכ8c PNqyqmZ wl^sy4:jۑyje_, -?[ך}ޞP a+'R ӠpøDYdQS$T,"RUBg'L`s*u)xazUfHkوZ0:z.|BID\U'eܙ!D$t,7F+B`+ v}\)dy 0-lswʹC'|NP8Q B|:p0A TB*$TF ̊h,z^N^ Td6cZRT#|;x)W[?L @p޺hy8)+z(*|NK9MR28",Gc @#V;iX.o5;I.sJ$.}b"|zL(IaMb%zV;OLϫu5}A8EQщseTpƷ/*:cK.GXp4~i-J*9LswGieT!m[ػsՌ^m_M7P솦.Oa\~dWϬC8`87odŴX*gHŸNzZM՜VQ)Yͤ 񤅒?V gd6!BVVBPƹIBcP IQ  q40*zuH;^peNJ4-y3+<$w`}N}egVů ď?TCETiD^UV^_ *7ٺp6g˷bxށ*%;65>;|a9s_9格!Ye|ʱhA|*`N\IPJHԂe^{iYþ]S`>@1n?n#.nOxܱN:vhFMHbRp u)BBQBJ$Qm%jGy-7{ ӄDtuqɔLb2X _0jcReHP$a%v ߹jyȯ pq5Zrdف-k(߮4Lū<ޒRW(D7pTb PpˣaOֹ^<&UI~e`V51D VJΓ% IJBD}Q U5Y-Agż :"rLhs[njo;y_iˡ⎤w$LD؂Tdm$DP4+MedCfİ{áQC8=7 jy찭Շ[f?ХY.t.ڣ6 ֥Ex( ?Z_p+]݉tA=?~mdt] BES \VPݍȗ r 5uDHfGrL/ n&m&{2A";dX6{\yxˤѡ1|IbS:fL0HC 9clWaw@DsQp٤%J#mSd*YO坲 @p^svf9zv(3f FOq-qFHL6:w3 i@Fl4ZzWDu53_1Kla%-ÏJo'uq˪W灢3EJp\mUJl)|u@ %sJ}))8QXDOzqw qvI/cegMWZe#xT] r&' uQͰ%+[%nB).?0yXg_0V)+ <*cP')u< ^y) 6$ޗz,@]ûY.JѠ7ߑ}יb;XOݭitW*rRIWK6k%]?{Wȍ/ݶR|`?&s \dn૭,9dq-[Җi[w1Vu,>|,Vr.٬>%q3 ) \C%S_EȥLKpxU/5v.ɚI6-DV C>8 Dǟ`:7|ê(ï =5*'[2X|8Jjs8[k\pPf0ǍeОVSα?S#PcЀgQ\4~1J <SOi [:zݕ۷l|m&xD > {ib~Y T_)=qp'AD*ιYRl !'YZBb`{`A KH!UpJL09SF ۏҦy;&(As8Us!%Z3+a`œ S ;N橐*0ÉkLeqUtT%BSPat,fk!l-CZZ~'-n-zhƌ_~rY_1ѶQsƂx*6ǃ:@6Pcϓ7߮c~М.sڳ.\=@܆.+ЗuBq$K0wGM4vQ|.Tbj{?_4d7Np>̱ 1$0|_`>Chun/p ZA7EElwٻ9Oqjqp4VڽVjd;jY7nypLPQa:Uќem5:UC-9Q=U{TM(%>"B:zͨ"GcvpgJj pL16cp,p*k{zp4\š飁+Tט+gW%7 W_փcӗdr5懿w[ܔ%`\SXՉUf6`O*0W6P1a4pG5ύq-]|g?)A`qDJ'T2I5NH+ROȃ''_'_X|}n6ѯr Blp3_ 5ܵ~Ɩ8\MI=> >K(q :Rg-ךROmZ 1AxGU:6ނ~l \bI/8|htauR`5w_S4|On\eUwgg+'8caoj̷G_.yͨpάƷt w,NKiukFiJ2(gk[>7WJń1D "\Ǚ$%LOшS'P6i :UrY[\iet\NO"eo_ܝ5uކLˏ\B7 sVX슧+7\0x> LφWl=ʎ_ⴂX\jӚW`u" 'L^z5*%T8P/xoZ"Gj FMhJ+Jq.@N%eL*$ <#:j>!WS!#<&FMW66ikj4_~< ?{9Xk~+Qlp)TCETiD^UV$R'?mi\;{SW_vӽl1jV> pGݦpV!33v!]\kmA]=%6Oql&BK:~ҳST9ыy`An57R;vY} Endŧ Xߥneqm+ӳCp{fU_ju&ci.xtju]Yp\_ 落eϭ_7T.%xw%# x K"ǩI0+a}sy=+w8{lFqϴCpT}P~gYe|MG( Z%su⒀NRBԂe^{i٠Vm>?LBtŽ.DbVSƹjR :%ѿ ~a:HIdsjH#;pporpҪpA&IN%6Vb~>~3*?'uM~^ 4hKn?Z|Cfi:O.ٽТq9t~,†Wg3Dp&8M.sT dCz|ӿ|Yjlx>;q+HM{F99q ->-B\~ -Q7< ml'K:- 7$=^~=>qr=E74Yr9).(![z® ZʽؒHvOuw]ܒ0iހ$XR+mx'VVyJjw+clȊ亦QCPw6Q vR?;*߱ hW.DĺhP (WwXJ& 6C$[>BҎvXxHeBc{YtI:eӅ$>sKp5ޒۤϬR{irqoCuZ0|&t:^L yGtdth);?(!oᾜz}ṻG=lOw. N9=AiMtL%k򾰱AfSyD8/r9;@Ozv(^-qX#b$^&`\QJ4ZB ^Dor6KS=l>W8z"ۉ-f+;NJ/rDg'. 8=G)xOpZ .~DUz†M^Tu,Uzzެ%ȾJҫ^*O{_]8V bJ _9zy+iB`Ě$R}#ag{l J !|+kj5(#BF܁ dCj de* Q>.bu$`4y$PS4%G${&ؐx_%ug(E꾂N7)Vն-Vl*|.0`8ЕIީIAJ UQTɩ@RNfnzSM8)2$HҞJ-+,jhdJ,h'W+RZACO8͔kK4/M@/~ͺ|(&\yR64ǎhV˛+:*0W6P1a4pGu:"D? 
d>ֹ.YM  +&RȕR}^WX&p'r^ Eh B e0X>zkW8RWO34qHPR%A$:k;JRXp<'7*X N8CBe#e1Q3 E4e')GFO͉"=Qd,UbzL!=cwSJ}&[i*zT'EQyPR{ih{4FDHeP*8hatYdZsD C^'!ǂA#*IeN+&XLXb/X(z,|7יo8o>X/weoh4 g9O 2.j!?I:[B *Hd!%wZWC ٳ9YM.ϧ+̄|umc"^Gٍn2K.桠v1e=}YD.qKLlmtlŇ=>ZCžqY'qڍv\ n~ܓ@핶yˣۿux:=&Z6JYht !(O8iwHQ·Ӄ(T5'X.xT|;>DPk⻋0/MRy0mf?&s-x+'0<œUr ('B0mH[fA VԚ)c$p$sDv Yi*D&mh 9Fkp\JؗVGE.}lؼ^~Zm."+M{1:u-hp]~r֔3n,$2X:&t18 h#E⟚s(`+c/,*!T|..o*x% K0xƧmWDbU&~=ׂ4$%7Ú'm|6\Lή =)s|>=V9\9F/uQɦRꪙ77q1Gr..}4ȴqDG7(=ݏr1[]8tt~}?ɷ߿; Nϛwo/qQ5Aχ3+o~\h`m-Ƶtu7VPÕY+evKn58_xInt6OHa9!Q!>|zT(n6)tV:Y &h/ryU?CcFjdl!8~6{ȶ8 !9ˆ!Fۼ!v 2= m-ś9Xסy%ݹl< ́)P23] P̉sKKL!XsdXV"6Ig8*^3D&&Ĭ2@mwKytueY?]_<"C-t{>j~f|PF iJ)~*XDV280IIˌΣ>2y#1  y ]a!-X\.gZ3&'U퐐yz?(g} I/.fI \:h8ufb,YR̯In̓Zf_ X5 _F#_W黫M1K?b_^?T;N?tb3!VWFEqmlݻ(M]_oϓP.vo򿟝]k`_+Du=!m, iI$-Ӌ*-4ϐVtB$/)OGuђVe"S:L9뫒 \T%Is郕Z) *Ngb>!ƀ) V;YF5;eR;v 0wf[XQoOToTH_^U}m ms$,uJJ!0(%c'5s;:':~Nv[R K }BLyR5K<Rf("F Uy‚ѭd Fx ѨD2XグB (+2O-ﻶg}ך;:KuC>Nۏ a8ۗu?mU{`javU=ɫ{?`ٞ%nWž%n~%nҬ:.tP"P"%n(`Gtotp }VUF@WHW-u U+d_*դtQ0 ̍n`pm/\Ν7+o.P}-< bV?5QۏwdI-(10.o޺ނTh3&{C`}vqO% R!Ҵ$Z#B #UfJ2ZúNWP:ҕT]!`)cp ]eFvl+lEDWJ f7[ fb"]BI*7tl1dt() ,Q3\BWVt_(`z芯|Dj3왮6 dtZCDWjۮ̘ nrmUkL_ R]rŇoá+9(#BFGp ]1.]NW%g] ]B%Wv;xDЪ=vzsM#`MYoh:e/4ъ+M"M+Wm^g/ۋwec1.ɫIaC+EIl3s*&y5obXJBxQ.?kee7Z'sj { ~e27'8e3X|SL??}S{bND_)[?߽^+zYaE/xѝ$~Vw:LV&K=N,+}Cge'%!Fh)J + r>?zb#`yoT Wh5a]W3J6XBQŖB0"zDW2 zcD(t2J:!E6KWt*[LV:]e tutKČlQ7tjh:]i@ DW %p ]ey(rte$'S{*U/tBmWʹNWbͮ#yةFܷfpOnHF(fJl@Wbm*I]!`Jyo*/t*u( tutP.{DW0@o 2BBW-#]re pW":t*ſ9Aǂ& o4Gc .@_Xj%U>pdUFgZxȑ_,&. LX`7A-Dbg)h) ؒJx䮲9 e4]eiѯd)Y|JQ#E HsUӦ,F+(&A1 ťD4]eiѓ,lKtW AyUXƸ,iL DR6%+`F&e4]eqicr\R6qWjϡW]©Ͽ_ :L\g=&-aRR+պ=RR w֦1* ) URrh tW?&0Ƹ+W4]ei9R[uWO8 >*鉠]8cPh`Zyi<;JQz@Tf)i *_6Xֺ6ZlQr"d?\=0xcC㸷gQip鑖^ة?[mj6R^CGHGZ6JY0!zJܟϪUIKŬjl0~qkS\S|wY\˖(n6녦N6<|oXDssZ>Isz֙ K;I׵$bN'WF,w K)+q'Cl-vC |[]ޛW=?^O{e@EƙREN\Ҍ*E6 \EpH[fP1ÖBxF8I,BEq)@/2Љ;ϥaRkjcљ&3ǹ*D&AS\@SĦ0k}fqRq?5ud 14|*NП<n>;YY~u6"&[]Ng%h0;<_Nϖ\Y/pjŞ6M"k&wP(>ߺK; FòQ,ül.R8X$9#u4iMlARhL"Ĉ\A``k ?M _C3>m4^<$g=֒'/aZxV$B|:/hѢ!De-BCT(T8ͻ`p\ m?ѣjCBJJgBRBߘ<#b^ "6V,wTS> oq뙥&$cG/h$Z T!X=C;PIn3QN# IH+N9 bH]S<1 J-e_hyIY^LarvqSjd]\X8~;N/&ؒh2=QBvߚvT.&[* Nr9ΙO^Yn*Mb]A|4Ўa'C`]j:a`{Ag#}N& ):n@1mVV%ZOHt6L'jKq[|>,o#bcGS\҉Oyy:9M/z UGt0*l%M?f8OW0* ѦӝΥe/Wu`vS%ARs?LYշV}$ G_gىO_-;w?BvT{Zk:U ];YWh!G̽gppɪyz[1r{FuW[5Sz5xg㸰\Dy4.Mbg:mhvr+ByPm*ԉT_:ǟ^O|~z2W8{`=N ]"臋0F}UWQ5ַ}uz{ 6yG7̇+K2)4 ?t~}y5 ?'~^/X.λ :Yq?쮇_Aϗk~^D)ץf>Jb؇3Y\*w,.Z/]M4#KoI{T$K'I^ xR" CI=H93Edh8b|Ӗ7xhBpjG$R4tNXICk}$VP9eu0rc>bzNGPb&ݠoA.F:<ƒ6-"(qs  N"֧$zI;@@ehjz AZK9K6Y/D(!)A w)YӇΡ3" ` *͘-zH|$69xjPB6LcPTO\ 0 X 6@u k=cy6mI`Y<* C˝9h''alH0D@lmll56ClMmA[(`P|p W< 硢ڼk̓_L%c~Ӆ0*d(Qiw8&q 8}⸷5>xpnڭ!=cL{}-)q<%ߣ<%?d٦."x V!I$~`({ͼJ8@ Y@ֻs4 L"&N &t>\H%ʰv+XGjBCb9bR8{αƣ/WG,zz/d3wbܿ qec |l?6;Y|λ{K Ko8N'̓^=ޝRSq2)Hw1rT7Ǥ_?~O[N"{L#4>S 29 P1 e`G}PƄPPЏXP(U6]aM魱\.Lj͘[4䉭{}.q8h4MH^cE39B,wpY+ZfLREr?II>rR{'~ݔ: n#Oj|܏b5޲,϶?@5XڈNvCyɱ%sD >: }{:.F_tqZ| iHLGPB*)x&n-t% EZ~>K^:v5aIڡGnBvr;YyJb'Ίu>ag3!'t;oc'^{=eWcpQ.Y?ox<`\o<~`ԯtj@[k9ٶH뙐HJ!-4h(d!եF:!Iֱ$"R N:(yE]꺩X\4 XFdIL\`V<3rAC'*E#FZUֺq='m|qbt9f8ĺTݼu| I L[,`QA@c"3!  AkΤ3%&vTq<:wϕ̸v71=SF#-ꌖ|܄}*0(*8* hQ4Z-sg4+q[x[]$G֔v_:r@[moHńjЎESޘY\c1ḴXRmw,8bb'IaiIxێ vsz]cvZ\b.W?$ܙ"_GKL&ĩcn@7ՃCETiD^UVTx\JK3VH6L1 bS$F'9Pø%j8_ \@20Np $0%R*Os^5@Q4NScp@ :g(@,J@mrpk1ٳ -IMr1mR~|Zeʖ^Wsj7j 'O>o!4jTjY#nE/Z xJy*{յͪ͊aYgBSUtl]aui}X<m;6kز–v{~'·-$<<rmfyO{5>͜_Sj|"xjz}Bu״m%†[mr6?XE]GxoylsI'R\6[qG%ABe VV@P1ZDӾ[?eQQ;dYe|hMG( Z%W!u⒀NRBԂe^{i>cmzSzq]a?͍Ԯa}yZܯ(}شK4^ǎͨ SgtXB@#EUFFPԺIT[j$5 >åC4!$)uu\2"A2p©%ژeYY`d G%ae\7!m$xq5$܁-^?OoF+1$$&j+$ M"c6(@Xœun$~gqFiyduv@ ;k^C`{ &ver3]sW~q]?s-Y).76\ZtyFV\]Oiq%ޮ׍nsU KTzdgO^[% h_̦^WӛU3FZy-F7{@zۃG! 
' PuôA6LI~}hw˴ nVkɕcvl®UƁ }bo˱⁊'W"L J"LFTNJdM"o%h$^s> aL5jˆ:g? joKglr vx!<}yVyP>ByPѤy$Qn%JNMߏdXš$[vϽB DU8F+߇Cu7W'4$}={I<Ƭ^N%u&-yPe"Sɚ|.llT)+QCH0rx=y;us|@8Αg5"Fe ϜTH%E6*g:5^/gO7*odNb{ONbIJ,Ýz Uxf('<\)WS{3W<[O)jqwc% rzH|޹**9Py0c⋷dH-uzKqscRƛ7o#Ї6kM'6DB-ME Θ &a(".meUUr*Sz.|ށk0m(ipSd@I:=  EF5Q42jV%ۂcr/' =$ .mB<`/_?re֛">Eo:  AN0^T@X^DBń&գyZ'NmFDt(ZX:mJX`\1RD]))peeNDNUMAHqŽE.@كQC|BIC%i>J0 Hu y$s"3$DoV #F:*q$HBS"Q9h9RDjop^_vUy!q6O ?M#h W8{ XzeXJ<{fDKM7"B@U cKA FˌRBrhaq1u.$q,4*^V ؄$73ufOW {r ȅ'O5$h}5_͇.1Qշ_V/Ӡ~6]9cs%`{e\.pC~;4T @C.JVK卑kƞE12mr|]A`&l0hF&J}-a k6;em9X>D=M#|IC,jц3I[iаM8d eF&IPQǠ+&Ԁ,\ )yT5!'aiԥCȥE ꨬ{:Fw8p1F##xhCtTHPQk C-XBP%&yОz &%%f3"mpj>,eqy QFIhf%@qIgF x)P;.:{%ȋ#/$Ҥ hFUP@׌S !I(O@l #/‡͎G/|hP+Y7=" q[8ޏ~Tq"` x R3ZaQ*=^`.(\_&OηS7&?ַ݄NWۛNV1A"&i9~GT#'> ԕ!1P-w~zOVvÍǸ V~r?Ycӆݯmқ]Wv߃ŝh^}Je .bw]-TZ8A&|7op)np~ ]g>*]rBWkշϛdy\m!m*wJ=WDI??Uķ ^q6\0S!'U`3_YE\= oNcqꕮ2`UνM[A( C.6L sJ+D+m2J3\"]IHQ UF(#]] ]e7 !Ѯ2\] ]!Z6|c0+mή =y]!Z5|cDw6Ut[ů<;]uήU'!в3vC  @JtuSUK\GPtut(\DW.2\S ]!ZF*db +NrGt @~ø4tu7eJilh9FpE)4J=t( ii0H$DCW.UF+`t(QJ0IIW &5BWuUF)3K+) BL1tb+D൫rOrJqP$Bb*Åb2Sgx;]ej4/47 J2BZc j t7REҕ.LIA,2ZANW4#]2-\.9{9npϭ]uBKٙڻѕ@Wfc \P]!`8{np* dtQ2>\SV]eCW5|tQE'Pq̈́~JIA>B8ptl1.P] Mg`N#M_"MlaPL"ӲDݱ4 c9tFAצFK ZEXx.&6𺃝f,kK TXᄽ pSޯ=$EA&v13\cJ1!'h^.ľ[CswY_|2OTq o@zq{_@w1׸RO޵qc20.#0fv- &8ʒE==lK#my<$x3b_FGxbh#ֿ\^ڭ{}t"_7mȕ>,=v=Εv |G t{. ;.WJGGqûA8.]RkoQ rKuq KQ-Jк|Ʒ_GErh-]Out0JdXp&` W4L *)>-V^PU}+ncz`ˤg.%[ZK;ޛnvCeS)|5ӧkeO&'0K+ؾjYJ)')kM s|'ج/<ިgGW٨\~̟ʬ8qteڳX*ٺ tFɬSZ$N(jlKd⠥Dk$kTAjĺ`DiٸObNh .W9Ѓ̃:*wҲEe|v ,2iJd$8D.חgSA(zU?LQcKh .cX[WYSX5\V膃? n+RW檅k\ mMʰcX|8rq"-[kXG6y:彏v6OWGID1e٤KEku}[eMa:(!A&(I &PTTxEC4F $cY.q)1jYswMqp0)D߫Ǥ:F/2Υ :@VUI`BFcBbki}bY#"K3>6`8rk]^@w>ZvGbjlh.Bk4՚W#vjzcnvYNUSj[ep֌kC5gnG[Y/ a1W[ 7X to hxe|Z (ߚ_ee=L2գtTSj}k=9AE]jxsumV[A_>Znux t\u> s$NjXCpa&<ǿuGzH@]):{FQ+r+&ײ+:&_=cc~x>{FI+ ڰty\|_*pʚL7;Yq)dl,;.4ĔLLuwp1I(0' L=DN8)׎< >%dX]1EtQ7,cm_UCIXa,a%K& F9O$"(,ݓd,2L %ƥRpc p5DLV9:%(!j%CxV'S.yg+$w&$qBް~4k~?;@p{*x$j%aHt91+NJ:yLX9DSCAe*OZ@d24@FP1*/ךбv&͆ 2*Ȣ3 u7,Eܢ |aM \R[ 6u:Avq}1Dw эfx J2^큺l෶V޻8B7W'_ިR'N##&eh Vu,%pRik$"5"2:Ř'I(eLDB1YϢhr;KR]#cg܌*aag+ʎP^p)Ql=fTj*̆o n89bs%FCz0>rK?<~Ȩ#"(9wf O!=`geJ(l 161aAX$zg+qnFl?5/ݙva4CZ1Q-ul2NY$[JѨ/U:ÄC+T CFϐ dсF@b R!F4$# HA;1v&xXL;ҏ}ADu@"^"?C/XRR#6qBFV)"ZC QZ 7)8 Q 2@Qi&2i&4w,3qnF'S5}uv%xФ$HP DUA ~#IL y"1[0miNxx1<=@2+ls]y5z1$83{ښݤQٷH;$k?N&U4c51jQ:`uibTUR(nOzϔ@jZhM&^-ˑ(f'|Tﭮ]Y0+^پ.]1o:oA{=OFclkKZNI`:(c !j,J4JQK@mrǜ ֩O\&2F"G0'L 3x%LZ-]slͥ<#YA(qil'nټ^ܾDsfơL˅e@9Q0&䍌#h➘蒎OsCM"0l]l|MB|%olNyO7qӈZK6T82X)h@q-ypkB⾃Q{VNӘM~}lcUĉZDǀm"*A+/E/$83"*ph0 <8>4Ɛ"f<sKB{&q..mDkˁ5%7)'Bf;eMAQ[6♔rf;?!*o9u_Y7 stX 83_~o ]T*qw78AƳr*]b=A m^qhalEě̱*?+z ȁ)(*XoBz4F*8^5C<Жo8ZB;ȏm`yO-n&M7Ņ>x1:TSs줽b1*9}p[nWc$*F w/P;3Aȶڒ[: ތ[YWWhFp(U|</Z98̮9FoumnfJ&ܰDf%Gq9tFd~G k *fu6xCUFAsL m0m0E C$KAtIF8CJ]kϧnc LtPY/ 5-.N¸W rx޿'2ӊxM5Ԧȑ͂vJxPMl$9eSZ2#h(ML|HE.-L%ʰu'8Á!1qĤ3qnuNCh:pGΧۇ+w2B\vٲ=fr]4g'Mv{36t6Ā҄~*:E&GBYTLbS1GCT6o$xF, qYRelW8VgMO&0Q7G*so x:6df7S4lfHR(I}&p17HP-f&sRFX)R ėFC$I^E9GNdG~ٔK.<ɏn{o<'^>".xۦX܉/vq'⩜(+;ƺ%]q{]ko#7+?)˗b7NdI >mm˒#$KJ-~UQ$E^sy N!YaR 1 (2R6' )X ' m͒O#AR{Ps( QɨSKY_(vFseEHZD.j[8 S1 1Lb&{Zҹc<1B",f )`YJKڰH4pMHRقkM'6DBm0܍  F8Sdivය?`Y؎HsL۰- H2(`i|9vz}d6 75dMe`Ը9vБ"ޡa}̃ڗ2DR "xsgkeL9 NO*\/xG;:P Ona_8?}֯'iݕO?ZsEړz d gpYnGaXr['g N*lum B y1gg6к*k.[y /۹-PVR6zTO/ ·2ļU)3hZ 3l/?*m/au Eigcv|=Ytj2}}s8ka`_KD{A6~x Üd iႝqJIuΐVhaIZbR gDȜC]o uݟXpHkو,I=>XD"*tFL;t:얂OZ`3 f})d~0 a[v{{ Zz-ߓ~n-#XfQPh1d'̎Oq7~T_7'$dғ^d'B1!jAH։@%'H*',ժI@H@7hdqq,<҂F0e3ֺH%:FuC>.=ZAXOlkf9*H .^e*VbԚ#7S듌0/u/3ݽQq6?+_7_>&.  
&/P>sw{@&U 4~7~ojs&T:7Oߟ'ߞ^YQ,k;#;\38TBFe` \ sPv^2ǩ&qP FjB!V*w'RL'55y}|ǙLNOǴWѱ3}*l_FU<QJ:qI@'dj@)r.海.j!V1,o^`j6/^}]kE_sy狂2jl9)bXB@#E(##Z(j]H$D:p E0NBe6 u\2"A2p©%9>{ReH*@\CAnMA3CX+.NΤ םQ1xΒj=kY0֝=.Nlڲgd~eX] BXb{5y7u[jyFE2q$$=LDh+@M" 8c(@Xœu<dv:(z:$M9D0Y+D+% AkC-0WTvɓ>7 gbRin)㹝|_}ӥ&o1^|h4ӋlMI"k-3Kuz;PlpبףϽ9f]iy4X v,6st>& K~4(`4O0@u{oGeX&x2UdZ=[ϝvU,K$?w Fy֦Ι>03d1%.\$ a^4MtL%kr 3W)+$/(ѧGf-$ޞPKGf-𞻃`c)8Gٖ81/N0U(x B-!/Q9s:]:׊3ЧbgWO~jƊ^v?tyUz屧-£U8kQʑ듺p/ Ą'>BHࡘMm@P\XdT 5"~#JGj<='mA]PҝAIY} MY [i^R `j7^->OZR1 ( M. Z^8)-:(*RN*-ejQ)0e<kIBt9ze(MS;}Y+2>T<(hS>u,_]V^M٧/y'$Ox\f7-x00A Ȃi :Vߝā0e0*%T8 i0LM\d$@ 'REZq\8scc I2& O5ZkZ*bH; OmֺǭK7@i6G>g/=go%]ۼ vL텘/ڂ>Cݯ~ Z9TDMIիjqͥԉ%= G[҃֠K}H":Ɂ-Amz+\qBcZ".̱c"GScp@ :gh Dy &g F8-#ֺgJ6䶴U ~jR;ɪpZ}pW\pK?vG쎪]xWN75NݵNnZ&*v\Sx=玼tQnэ{?Ѽ@sl6j+F(yK%χxncΏGi_Yu;*6^GgseSs+_Q+{dårǭ!eq2s'WC;8+W ] 霧6yK N#$"aػѥ,|Ԉ`*D4>ZMʈ-w`2a @YF5Ò]#.aW<=rjnV: ggLK23/ QS4%\G%{&ؐxg Z u Fuo`1EZ]_53"󖎻w€iW`^W,*Ч}46 Q@4!HOQScC8=eqT>!z bvEc$,m}8xX'Vy2b `<_*evUjxMs^p4<ëo?}_뇟~Hۯ?'+f[u=ݿi'j[6g>M.G={G{w$Δa=rgHQp-%gdxXWף4H˯ MT .^w'_L3"fz+G0O;ˢ(5nka폮$EɃW#aRl}1\:e<$2^IzikR5-GK>\(3&ڣj&holdRюzRKJٚxbՋ.RlU qJkz>wK"zHTT,(}1|8J!Vә@3;Cc)?|n qMS9j!'ᤳxL`)x|:V?})vS))b,~UT=@,mpҎR B /N ۢ~X! {hmZzs+2+ 7:ɥϡE]dcUu@̡6ڙT^N%)=OO)'/g.X_?XVFKF+|EFRxO%Ṛ ,:qس6\l?m5cjy@0˱(1EV*],$L"H*Gx^—'"1_/R~#M{ڮ&=FrKPkJW2Rb4xS(ފx|]ڔg'z]$u9k^kYZD'*v؊IYǞ*R2T:šE-%w`%$ltI!+B+ɬJ֖{׋3P}zI ju̖GumJ $IN~^fAf!>6ץFY¬ui<9h- QF0[hh4Ofr 8SJQLOYRpۮκ?`8.rGRe T]ՃӻGftsY:Cfp޲[Fc:DG CGC<:JC:Hsʊ\1ϊ!E4J cT҇JRk#g 4>٦n[^g(ku`\`<@{,U (^րr1nTH3򠒳rCy4rļ!ʻ`<3)tEi{wѧfBR;pe8zŤ[םZq'A=ځ@iI i񆝎/eʹGq.Qo'sEhI3&.7oDfף7/\ e֒лDhq0tf|)pJRvT;JV7X{0~&@ j3q[_?Uncwi[v{={%sfBOM&~q]9QIX1qh(NXy0't0VL墕Ż(ʐ]P);BT(- (d{W_HEV] !Feo 0Ip4RdT޵q:GtݪӮ[oż2 ǻAXOyRKߋKE"[0s0 bzJxhLk/l͗?B 6Ԋ{aI5AIj i1 $<=<1|%IKʐʞ Y3c|b2ɡ='D)R21%ms RNzZ%jT +EP,}cYwFj,'X}َvAS8aQhnG|i7ˑdd}@eVR\ L2* i}5łV$C- zA:߼lz{ {hf~V/ ^u Ůy^l~Ju`k<뒮<Ҽ*#{*+쥱GE;toҕyٞtILX^]n !hS:ŋ7{=qʒw$ 1 F$)d'+BPtAw}gΰoͺ-Z PCz(Z;tٿ.xB 7.ƕYy',Y0Jʲ|&eQNA>k4?hm4;6z]}JOF羡 >L/ -z/C7 q'K׹1}o~wZ [u?]Vj%tjmG%4/nS=^ض*ORUi^Um6jT5'\:WNUu}41ˊ<1A/ջsl8*%`; F^ ̼&u袭 ц2<"&}|16av~> Fn&>_|S4WAof.]-ih}=mDd[R7b[^sulɑ]UT>v$M>f9䰗fùϣI| DQ/TUKw3r.deն뱋"e2A{žKu[IG,էe*}3fYţNv AoyEdo/t *ꯥv1BȨD}㺅+YB*Zahuѽe8Ǝq˪9Pj>"Bu%z$h\]" @!4EfR-&:' 7A{(؁7B H9yBZ_1xpIcq&ydB-ٲ7ȑutsqtWѵ;RBڼ궹QnfcxEB +uV KV=: &z#zyҌU%!% 5'Ǭr@PhMI(RAB2y% NrImfĞ`AspL61V'MIpI_\`|^+GES$d`<,0;/Y,E#KZ^ =ݭ/$4"٭.Y%b~#O:#`2l:ptѲ{^/sndd1ێzU]"r6^YeDԔ :LErxXWMDH>0"9fň!o i l'>앵cՌ]4jOF5_Swڻ}uFEx䪌JRNIDqz3%9G%nU|4*V@r LѥڌE(.dU(p^us*akXh}w-$;l=ΌP-kRZ0%r*S{ɦ&v>B0]M' zj ӏ*+*N~5RqX8#82 eơXȃ'~ Й^7#z˺\U:99<9^_5)r爹,+%Cl1['^TUN+Y? , |&J? T~ĭk6U|dvT8#v>;[8D`xt+iiH .\m9fo\;q)2P*m2j ]z`CAL5\*`^+YT>x8,p#w`|:8(? "x[*_)pQ,HL:7J*B=GJT%6j|1hƕ-$ޚq&K.VWT!3G&;0"^ \\_u˒Cq1 0qq}T7O ҠJn*V5 4j`%A'rEـl_ ˎ!xx1_85|{F>x7Y<ƧnQl6g?bܬm6kOzF{>==Zf%/b|45?Mn힥=Mrty"+=f[ܻ˷Agq?'(D_ܢ?ܜ1T߿ŵN/o+(:gnnۀlGu῎ov>Lvu:>wi̤-&9ogKrv װ}W$0_p>as6FvwrBIDCК$]]U:qs!iWt"o=/.\o g (5J ަ_o;`WǧRvMδRQ7'вG֫hs,뤘h13}49eC%%1{!ԩ$arSL?Ǯ+qo0w[{ؾlZowhmhu#@-6Pd?7ٻw/KslJk存!k &UTj1bM"鞤>" Ac1cF6b6lLiM`-Jx_hdgNKQJg<ޕ@5l2P&sTaVc. !ƪbJZ(LQ!(}tBuᨢ=ߥTKVTt?7Wڊ`3<%cNB6W _}0:!辪fTRmYTE))I f[5A5r-si/Qgc"atSt5jI:U)%5oP|JVTZ6BYG\.@5 F Y~im،C&^ŃH?(w1X'4 J`tZrU [j% R" f;drÒ@]0CGV+ JAuTTu&.uCݎ@ڏ!N2`4ZEZ 6u)Gk}FmQ x5&umNWp Nk95eبM&jU,C>}cL4 ( EWRX: F<b46gG8q P a[5pi,"] 2U[ VY dBߑ#h\;#eodj0!ЮHv\wAwZ~W0cBYzd#N,,p ]`&b0X[|E0VKAI!1+Χ/U rlkLv/ uRB{[ 2ÙY)0J892jbP5Ho{`ո 1qi$T &I4H Efd Uȁ l^ hB5VcZ2&-<Ҡv^bF^*Q1k688_WbSlc9i !LX8Lp}2S҂׬UZ. 
AA۴ZXI| xM K8>` ߕL$'-$2Ý$Op(v5/ :#/Ur"5M,*)XelW8T'&->(28pG_}BLtUK05"WXUk^% 0 T^4'rw㶯tY߶SWǴz+MQg!nnPrhFIQ#mX:2~y(I(u\5W k*?kM!jJ8O%%іm2y@(oNFby68P[2Qtpݰ(!A/Qkt XR̂j=TFoTݗ iD+2;T*:CZL5%fTiI?L9PuO 7u1]a|oo7ĊbPDRuQՀU(,w`7y/r*ah`SO(鲨@cDI6WQ]bwX\s\Sǐtj)b1QЏmupc`feZXCXAZI`%*/S:Y2L3Rr!;^YGu~=ФM7fXhĝ !zS(.* .0L,[H N΀z2\HmfpJ7a"EĉzNqC`74g\X. \%TǬ<`&R&( ]X4(`$* rkW lBȐn<!GRtU1~իoᴝlgg\l]/@ڕM+֭+W~35\ӭWx]W!]xAfTsh3a._od!iӴeM6@Oak0?rZ?6\zjپ' 67\׻f篗oҾ:ĨfiiiiiiiiiiiiiiiiiiiiiivxN~&\LlLFLo&wD{-@yЕ'""ϧ( e򶖟Wk\Q+M8"zW/@I.Mp,u3gǼj%%ٮJ4 7Ζߟ&p.KN Q~ӯvaco|½oC\?brkqwx(-Go>n.qKT6ݏZn?z>:,B?:7qXF eoKmtt=prw6TrD6#ף=u]};%\4ysُ.Mvn~:tn{:g{:t#u1=ue:]w|ꝟw|+ݵ˓t^'ԭָۉZhen>ζJ}{-@WGpC@3woO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tO7tT78 WWޙ{_]~]CU!lR5S'56.c!);nR&[2dQ-! ^ s`3*Cy` M%0*h hl+8o f "nm[L@L`PpPahp`p !\ #!X k.)!B%~!1G:^3wr0"װ/ܢݡ{M;-S9: s?Oqn67~adC~bC9:y'6K!KRRg@TR#ì!2y;[+7ަJxo9< FzInavFx?]%;Sղ8݂7m?0;BsDG藟߾-fф(t++x99: M@Q߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅]߅ZG%y?_jJ?v!eu?jHz_@}L)</yPXŒb j!RiL֔3o';)C"S*~%HFH֝f oVlUw,ҕl0[?wpCY__V~Pg"w쑪UxxճFwHg;yk˙Vw$pթ֏jy݀]L>ittՖ>\= g^yya{X@_iyLϮ#{fuղX[8i_]pĪBjqJy+49Ba"04Ֆ^}&=ք֟]ۧ3CmVA>rJITe;r+R*$P%I`! 9R*]*^ ˓R^ UT^ DLfJ}1|8͒?2l,ȑ4ӱbJIj*Ǟ7ɩm,!rt]7H͆m>yQ͢z F*10qHK{%&*Rcg c q%Bd֍g7RugFҶ~iQ;1؞Ϯ@w.wxaq2m +bڽ}oї""<׶ś`VkD?3rYOq$8sXH,8=tqCQ0XO \f}Y^&}du-zP#㐜czS;\g$d&) _&yͨꊍd_B>JWiH{kԛ!NC#ߏޭfy!FSrD[--w"*S|>]Hz[:3tmS7DZ]'RG[3zhUosSÉ]5B֩H\W`ɻ}@N1(;"Fnsˁ,R##6"jmE"NP/ԍT DFSd;S,FkXdeg#2q8*Ñ+4O-LJ"%?yȉh,os`9 Қ²^KE m -lo!oS~͉Ǟ0f O^D@)I%ggLk1}j>5u$1W]6g`%K.lz‒i^ L*$AIg7viɱ5bO%wHOB0ݼA8e 9JyrGPtRkRe6@I ')8y$yH0>6Xh#b2y @4݋M}41"'14oIG/HU `s^WwW}mgZʏGǩ>SVcW_/[ΗYY WQX1*LedUң'Kt.G Qb (R9,T''"ā`#Q(ˬЇm3:ٺsܱv\/8H.ׁe+z5cq@s8Tv@sQ\gr1%?]OƷ^6rW$IL;^1DHŴW+ uFS zO|d_{&F'Tc_)rFAp8"+c8:| r\ >7tcvCo%E%{Gfi,&&H ̀]QQ80I$ U "5˽)yf_4ϻB_6;ydnJ$Gf.5lEgE'\F:R*焎â\TiWq)% Bj!ʋB})n`P:GzSIGm7+p ^`83@*72fqddlX,䙱,|r"Nqp˛[Mt:<,W8bS#F"C۠A$lq΄$<(%GFÐ=`A%$ؤBt(AO\@%#]I#Ww#~+<fEmQPx_OS83_`1tTFhlV=ɣQ _$1dYD8-3 'J D/e ` \=rTiLf<֝xdTQD,xYXݑ*/#IĜcA::!;X:")$Fy 6Q#Ad$&:łK ̄H0IMKu,3"fqD|ࣶ˵)ufʌbC/ GI9pL\QbxTaW( NDJY-b \| \<OX~ml3giw\0F0OxwKrϾʺ̢qj|5W'1{Qrư^O $Gur#vic  r6GgV#LݙY.Õޜ-WWga_%P0 *:mqkBCRJ770nٺ/>$ GFsK7nFsӀHSAj>Q5ӑZym~go:se~q:lǖ۱$ճ 닷`~/ m-]5Cᬷmcyևz`{szzm#pk]%[󇰞#a¥a4c|Gw+TTMV6#7_9 ow߿~xwٻ6d*JT;X)7]ӧ7KS?kΤ&hăORA+M9y{\EE_I>W,Dw3v|P9"p_[%B[+7KNUrQIr5E5?P  iL \d%鱭 SWV!Zf&U?BIϴ.mbe`FȔZ+N^:U=ed+9u[.db} G*/=ICl~,Ӽ-hRx%є[J#L OѤd2!np6CB"}|.GdR N6)8ӊTH"JFc&=jރ1\js?cSQ Fzn ‰)Eig8>!gYld[fVƠl|"s ́'}!%a_}Zpmm~pRImKM^ Oj&~0xS8F0Q{'ћ۔48F.8o|]ݨs52(PF)lp«sV rfe'jKɋư;{ 67»U'ihko[{IMˋi|,pnw]Gկ_WUmuo|:toZ9x8+H>Jj-vYβ ,`7 g^G׃W =csǩ3Ɲ47%Qgt7%LU N<з,LuKaW DEhgCrÃ/}&>%J9 J+k'F0KBDX"Q$FhAC41"Yʩ)V2""&0a$EV4Y@CEgootTր7}= Ex" ^A8"ve{ƻj|i>]m:?%~| F;6E ogɖÄt Ni Vi#E M21( F刊`) NP*:w|W⬯Gf; ay)^zE6=!"*ѾNM+ ҃&IRTz"HɢwȖ),E~fMr?]dMڧGT%RM|pGXGe;^5p) lR~/-AԢ/6mqz, Y<#_E}쵲2TD&) *IusFVNǫ9ɑoAW:435wsu60O0%].+VWv+9?=h4T6J!^ڠLiL )&YYl`J3 jIO~mܳi.br@Xp-1i  gh$8`OB{-n1T)b+t@ؤlA2ӵ@SJx-]ܼ/r,ǸPՀ$V1w SuW6:Gfc1Zнm7ZcFGOxJN7tlS [~=$U>sW)eTk%ik︯feocK6!]Bfz_.?xL< 3{"ewW1ݲN==ƙ}Q}9![%JF%Eܞ.ޑx-Zùx-]w2P*'I!. 
RǘGc)4fX ײWhr *sVA-m=3{9A {FNk͗Y!g) ̏܇PK}lu_ή>.r &էE"Ύ& BXCOwTRsgFXb~D*]MvDSή ev ٕ$Jdv1]=CvÔΖbTE%âqf덜l R\~/'3/7)"k |]Mб­<QZ#GFT=Dr=+J={tcX5iGeZAaXɘЫQ3K`CP@P@PM|*Zp˃*(ÚP@mURAݲSμ*)2V֓9yp4*jνSH#3BP5(:QQ1 lXd Ipݸϖd=qĜO=1;ǩHp`X&q{8X 8ިs(18N\|WEhrf>CCz([NC|^ꃉ4hξr6kԭͻc-Ve[&I W8]ƷU"DoûKDK uqnW!H+LkXTvjK!풴u)i.ߎRG=qEHsU]ըtxҸP|&q$%ݓ>TViUglkt/|Z!U(z6H`4snY* #1O6UÏpCWY*0~)wJHobGoǣ+Jj)`ii#9{@5B/o9t嫙_6-K7vmPB?MCҞas\“]pٻ6lK:/9TD\)yP0A@I] K` Dc;_cx=6 8r|sCpOI5 d|7naYm̝lW׳o,Q/E>oCw'-یÑr4wŘoɝ=B?n:G*i*+n>oNEtl}6s(MAN< NHsN!ٶ{f=l6l1v>`6 Pg;\>7Dko]88q3daL{R>9[@,{.^mbN|u MͶѡ9ųi|RQ4}4okzVn.z ) ރ$C Gm$`:9-Eͳ:xB$q4 ˧\> = Ͷ!qlK$K&5yKT(Ih:> "B/S`HucQ{Wr9QuS9Vm]Y ƒUqcJua%.:_{P)UwMz%a"dkf}$_b)!9[o)!_l-Dz>?I=SԚt([A",P#Ɖ\q#A$ؐ4K)AЬ+C(I\~: sޙFefjGR= Be R۔& @)HRpٵ!=gV0e_LzRQ)2r mN@t10! {std+Ȇ2q-_Dɹrs'p~PSjg]U6eN{x4 _9ǨVN6XeO&x-q[y;%,ui.Bn.7|_0xQϹbi:+Qy)FQ+M0EWOH#9UQCpyC- j%5"Z竬:&jYARh )UKw> &yGOj@림zNv٫hMHPb[kke2hO {7U=c;h;8"7B) M8؀&G]_U?{q5}Y&D'ߜV?5b:fp%VEfU U"SMuOƜRx.ſi(8:$=r=ќȶ7> }lGHpNΫG_L+/%wOWmҳ*N99E>Mn_·ǜm$UDqAK7A77FJ&6>5GF+w}&#T@lub `Uwȡ!lXHAH[_Ku$ֈu֩t{v bܰ-yz6Բ=mjK}D36Afq;JNpyɶ NfNJ>:: foW6)% 5gs=Op`$/4mP TpE+GYnPi]'ޢtkni ӣCCy>UYixtZ7KNT{Y=uo(:%ݽS6 \&Gkvh-s[ SFme^:-v[wpZ9pVoYk/ɬ-J 6 YB`+'4+p*a_*_$|ʉ]ḮK/ֆ9u,ҧ/E wKkK_Z/&>s.~c hFN_r'X_!:WNbq`zלf WϸFk󨩝FIG=%ߤʵG^*!_k|[k5?hpDU>]۶8?E^,˷`'|v4ɂ=ϧ˰yDj ڗK%5ƒy?Z[J>Kn&iuTo~STn"|0"iXݢ] q"8bOaD`ǥ˂C7Z <*!Z}*ݡ>PSKI-|>e5'gL'ag2mb{ʍX(-1@PIeD*pǽ{1jde_`GPI@x7-L%;D1%b NτFBXt{H>|{l8=jI-,`V+0}gb8|04N6r$_Q$iJȾJ<\#J]LV&f7YWb+*Su;Iuc EX)ِ0WJ2H,:ijY'#IqN rhp $)^@*4~!NTfQEVxq E-2Υ :@"I[UI`BFcBbkô҄p6[= }Hv<r=Q$uZgsW׿^h ɺ  >e f4@mqyu3g?-qæNn{}4!+:sdaa%k& F9ίusBShG.%Yd\3 VJKATQ^N&B <*WȃF5@ 1P ǐod@hu<ףHYJ{VCyRd ]*:*x$j%aHt91NJ:yLX9Deς 2b 9 2KXG&(P1yĄŒZl8.Y=AX`R,׷RlYaٚRwڼ ۷^49k|[k_=LtNkn?=TU' A)!$ "B $N*mD aMYɨN1&ITDC#uLֳ ^NCTTqGoBd,6͒V)& QƶP( * #sdd}o.=6p~s/\bs%FCa=%?<*[R u$@Tԣ,JΨsUI2deϡ3βJ|R٨vĄaU HЦUGl~#A V)*" \&@ !EeMfKy2 ?@K2J5h b,~HBq(Q%Q">E< @ JBbJI}xD}6DҔJucS`)}0(94;ME`]AXl8=jrqZwuHbT\,\V\<ŧ(,4=$iy$q8ųHB=Dx"QVGyh{aNgp՜kvuֺ:nc' y? R52mz]`)kV&FU[(u/z}Rs؁([o(TP!u}΢{ îz?=sxk[VN`:(c !j,J4JQK@mrǜ VIQ&[xeKPO{)\Q}RJv(B29#bA CΉ#OoGHhHz 3q9j%=D%r`xDmS #m\/O9A($!Y`M]1(PAxJG-DgRrf~MTޅlȴ:v^Wo47s%??@Z.f|1]aWwpSnΫEK:{Vn53ةWjhoZ8޵#EM|_C֙ vgn32-ĖINbԒĉYd*}"m_i ΩFE `=r~4ܟ ޹ۇy5El$Z@J8j=!٘CIpr ׭Jm*sP쌊Q~C }.tCgK!R(_oUbi.wُ jYGmr湲V8Se\L֏ɞ5)* ezߚSպ5jMkiɻM,0{JC脘;Ir3]#Q|xoY6KGxeðX:,6NOQɚ|w;Llߵ&'Oգ'lesUL8#y`n1W'bゞVwtAg_;\T"O1=nv/(~|7WwWo~2}u˫޽/QUe$Inx2\К547MVuκZ 5NyɸOjz$lOrq7=sa|Z 9SVm~7J|EIS+OwA1Yل rW悒WGhb4$&Q/]IPÓ 3~ᚇ"kI4'(<(d$0yE}# 㥯`H#)A1m(BQUq*!Ԙz%ͥN-:8'Wm6UiAf7O5mR5SZFR=q1ʹJ&bI3;!z_KcFj"dlT!8Gfa(~=@R(mpmCrC mDǎh} 6ld[&VƠL@E%,sv#XӽKm{ל4w#r}NԢ _j'5wԄ/J$HY墷9+egFij*~P~Q.hKوЪ1oy>h'?M\ R6Oۯ˛֨WՓ_˸5~w`$|__~jޏQZR >>F'Y.[ȍ<'OWY!>^Sj32W%Q' n/Z Dߪ-C|ҁ *My*LRU9,A~6~4ȊOT%RJ~s[d_T~kCW}CPg!e[nVD_mT`Cf^E>F;A::U. 
dXNBQ#as]cnQ B|X8;9_@3 <* { s蓹t}w*qwJĻ|N4?#cg.\úbJfu9#K< 2za l<^/Z~Cqjb]\e*gTF\q%ABW2W'#2dUVcWJCqŕ2BU\!R\ UV}LnKW9 XzX|Teol嘡ӼeMBS -YK7K#>_J\BL*^hm| 4+MS8]sl3ZGG%T5kfgJh) &>%h%sh"X %ׂ|5 2o7v{ ["E7#lRɌyO-2M?<{r5!WZ  T2zs&M߂ؿu#MMlڰmVu 1k:k֔z.hCL$s ^Fs}c$:'-u}sϲ>sv(WLoC߆zqi ];)IJy5m\R w/{wGCO+ݷCgw4![0;8֗l\Bf~^=?xW-,o3A=Maޢ&W bTpQo ̑D,*Q]`NJ(KKǘOs{J̎<3=Y[؞2VmEK 2IJkX2T$D9UH*',UkW,3jQAԎFx:E DK VH ӸFFrVͨnxf ,t8󰸻9zI[G2X:s<Yg«hB"?(vURB\SʊZPaF DxY @8e $9[c+8qк6#vAˊ`U0bN^5DR !@<$-%ۧuT\ lX 8giXn-GQ?wZ=t\P~J>`s:aH.cT<3s>SxοHyK=Go;tdktdKz6u.a"nR֥5t oO_wQhϣF{Ts/bŧ׏(2Dш@0CIJFGh,nB P ,D}Y-4;I|&GXp4~i-J*9Lee.NvZbGeV{ػCQͦ6XMx#5PW\s(vNYo`&ڽaAaoq]bێA JXFLk^%pֱ$'L4Zj5*%T8PJa} TEkF ~T5*JC$UaARZDl`Ξ"ȑGcЫ =-{p$jqAhIp!1e-Ry+׊U6F)!551sD Dy &g F85#YK8PqLCyNpgTa=+ n-1e=ߠÉܒ@‡0ogF1޼noXfr<^D$mC#g3Zu#קCtl]`ru1j{\`M6mۭ^ͲClY`ˢͻ]Oz6^yڻA3-WC]07n'gYu#c2̺ ^Ú鏛|]8Tfʪ;:J:d-}Vsktg)kYnA5Mv7 ͥÎ.i+)A fO#64hn:LԉK: J R y:0`)kLHڅhp`2Qf$/\ŗG<6=Kשs@7>Ǎfh)fШ#HHZR"j+F"E"d^oK ӄDt:dE&t1e,S0jcReH[oHsܔ_"y%{[1:lޞ]rIr=8ɺ_p]׳ PdeUIAM:Y2gu7+H,ؚHnݙYdeD%\9i2_AUB`1sk􉠉h6IoٗDbŒb4\vٴ)7?rۄlWͻYNRRfw˲Ne',yw?ʿ,^zv!pOq4R?Ů4:ca%aN\"fӾݤ{t8XYY"^*hHEcD8X,җw Vakmm' ܗLdWNUGk"p5h Ҝ";MpF:UʾcH`y>^|5Kt۵gӲw`ZTo⭾V}'帣xDKNrn7Ýʻ^&.Wӷ7o6~X(|IkNaϳ?y!e_bw}K&gk =˴ټ Izujӥn04,D nWӍ+mapjٍ/'$(}Vƅ{^p55ToƓgfؐZ=`p݉X*頙PH,\Ng 0Z谾Jz5w VvRޙNo7X#@PL{i$)KހBT9dGN>,cK =~՚fMn ʩGo6"@P<ːvg|3g F&RRtN+::j :8["$c<B@!:WDc\PNG>ߣ_i.Zρg SVS-Ôym0 8nkq8C#<3)0焵rJDNh%7o~"[2NJL7M,7C?t~{NLëBssPړ LSkˉBjrjU<(E+u𙨛0&~P޵aj`܋ :&^J&qp\3ӆ& eF@LPJ' MihܞN>OE7 [|V%א}5X|L"״$EiHB;!f9NVGߣ|qQw +jz[[F]y 8`L8Ư3a~!lW)؋"ξxs(뫑ts)}##:(п/ lqFf])=;M׹'|/s[i' $3KҒ$Ί¿9) t@LC;ҹ*k*W\m?cݲέb)|òwCOMGÙw /jy9N,N ~S/gPQ*>s|vjPP"u_/67W/T;< WtEp!׉_+#ʣn%.[1/x1#-EIQJ5Y sYd!")-0Yv˔97P9cHe K_Yz}2-[ּoޭF_6 n[اQt$w%k%ksiTq:?^H1)d+:ʜٲ*it9 KȘRc)ޗ;&EI(βI J;Mp&ѯs7Zq7zw GI@ F2Ȳ0{;ellm5.MONBݗm!mO}SFuov×'T:̱^ z3&y΍/SÓT|P[+F#*#]WM-V-y \{)̷s>y=31Kq|7ˉo@MˆV-XR[h #|I]矑$Jݟ/I4/>5gn\1-?ـzE>(^ߪ(t…ʖ8YB-.ujfIEhHu iYGZ:% \2J3'19R,# xH@xZFY"̬,޸tY Q%Qtu~?j5-_xd0)#<~j~6Bv1%Q]zXnsj>4қ!U^+=Ѧśpv<}H2F$Kk^&Gvj%||AAv8@Ve22ًYֈkVb1~ӏ颈Ϲt$IJff1%}0wӽ Ǯ!iqLlvˋ}}}*R(aJfFeEF~LJuh|RY?bd*!R^_M8^W/X3`NwgnV0ۏΪ2lh ||QgjZudb?+SݯwՍ7Us*0M`]sm~ b[sb$*Oie} 췘~fDTH=u0}ZYeh} +3f1Af>ѓ嘽t#hlY%r&p!H0la,8g֧5fm םFWqt}~s/1Qo7f`l gm$ȇP V$0|hKZ|fWg}qo-*rKٞ[lC7]oR-V'?ikFK+/+@6,HyoR6-w>w+ȇt3Yȑv|P@E&}:O^V#v~&wTL+u%ɱF(X\s+pͥBcSoֆ+-'^hYZBIϴjhbe`FȔZ+uLt6:xB#8sCr2_U/ )4)g>`{nQq.`9VNǣ~̑pKe$"_jY"MXflxneH}{dtȴ<Ń<1 0vY4a\d4%<נI@E*{oW6Բ(0wAU(3f}_ tt _ B~C6Ίb0ɕnm~7| ZK.#ƒ;O\h FA/p8%b"`c1g;JZ]5my8X]~hawQ+GdyPp^&׏9!')QlΥbSF2`"[Yo^ڝi3~Ɏ#NdRv9k(vMӹv7J,F=a^¸dʠx sZD)Z2li%% ,ӝVi5O"BzdiT\ .#v9z(@A#}FVF I~e77 @H ph{{-r}֌he-{@h+蓾4$ ϯ >9"U>}U:Gjp坙E\)\7"sztd6#WG_~AKaT0uQQ&( $`Ϊ}m|yuQcWn@\NBPغl0ܻk6MqW{{=ɋip-UAWv(~/U~]_u͝_|-Fmt6Lj{oY]t:v^:9gpcq3&j/5__#EֿJƿ&3heb%cV@NFi,UT-VU8t*śJ1 kB UKubO9Fx"rHch1*C0!B*Caj3jX$"﵌FMFSYaFΞ Mp7TJ-9^i{SչNx8=:\lw7LuTfu p4YkDn%m2tB^L۶wz&g@s1h]BSn{dG6;6٬:%,nݿm{eݝ7=F{Cϼ/oIq6wsSoB\Mvl9Ŝrc6eWeW"Uͭ3%>)"'~慿A41Gȼ<0KF-.xw6ɉk4Urg2YC~Vs2A4@7Mm1JFW!f:Hb*rV1 3)gc mjT (uo|}ݏ>@ێGzU5Ղ/Mq^ǖjE61e9CYƙA8peD@Nq1m'"/o?:@;ϔUDEy-D@6D-AQ|2`$p,m'vS^z'~{dmOo(+' 5 ЄE'A6 o,/FACCN q#i4v'W= /i7nw7.Ʃ"SA~F < SFaN bt`2-?$rpLo9nY2$Jk7 Ћ(9A-U4٨CPА^BoŨi\5pUEdqQ`|&洛^^.>iW`sηFa  7u(57- ſ|g }hyIt8\[CWlTRMґa+|5؇d6N|ějLG٪n ML5U;Bde tG?NJZpeÑ-eE w/\^"boul32@ jȨ [kyhݘ3nn;_9 (U:Mzt:M_&zxٶ)q?z5/@xBƐ<7[ϑsm0 zt(esZp_o9O:gѸyUҰG&L^Q9{Dy-:6~foh w˔龹3gŽy'aeEODetCm5H2ܩ;0۞4FmD)NDzob 2L xA`\g)"l.YL/G(AYo@vy?B)ȶHIhr"`9a%fX1Z `ce'ā.~,O_68jY2}fƅ'/q4􆢔oۓ' yEeQҖ#tL%%ØG]m(*m{1cWb A1xi}HK{@{w&㩖98pP5`- sEY.J2N@ֹ;nuizJn|(6^}~{$Ew%!PR(zZ;[!Z4qJm#K: ")"f!W&V)4F@apK <1J9QЗ. 
IIX0ǀMp6v/Lշca&Zt,ZS; "` iiT JK+<_VO:WVOL E_G'զ~B9ۂ Hуⲩ9NeSse-R3/3)*3%;ȑyAk6E5$ѫTE7Y=>ܷ~ǧoJۧ}rvm!%йx=w׷$h#RF(W4֎?$~HgӇ[7ʱN^24c/weI|Y!@O==ذǘhSIJD(Cb"%Zɬ""#"3#M67*` c@J3m^(h#dI$gų1/d% 13BkBf>/~}Z`^"!yk 󂳖^c>g3FN6X^cPLqGl 4c r~CS$Qn&RϷ"q i8}R%',\ ;$Ӫu kRJ,Q#_,hPTRIg-~#(#k_C%nnp )߁(z?ժzv7XTdJҜp}ߟ?)3P"'.Fh~|Yػ)oq8Oy=q^=@SIJh-K E4̂N#F4r`PK"JNѷ :T60(`XwTq׼|`~*2_ƾwfcbLx YWkK K(7qK|]|+L[C}zEZ_4#v+}vd K'VAW=4nShkj!Cv}UV?}PU?:bvk>|geV:_$i?<5*~&=.Zj] XY>' FRp£a*0a&=LKDhKMV,u!-Tq1+4 [_ lAB3&źtȴٟ #Dp V`mp,uYT4,$5 Z? Lq+QkRIP4z'+E,\I!P`8Ľ `W ڨ%FQ=krf[N3  R$!5KE]NkȣfKLY\טq]-&"ɷȖ` 8\0ف "&!nR(1+mthCFҌlSQ~k$@O+aEH s`1HЂE4sSK0B!Fat tܹ`;^'csFEj}0q]:.릮 `T8*^m* r[_JX>o+_ß'rn2)f[{Xj 31k3(qֳ֌M`/on~-K/--geG6R/?6 c7z+jަ-2[3kT"DA%ϯ4hM` UiBG!bL)ǂcH+eǜgVʎͺzɷC?z8/ձ`BYd `"02eۭ,Զb_]@Hi|ԅPXne`6LX+6G[r$$ }-٦u& &}U"Yi-%bq|Daѹba?t91w|nMՑwߝMNÍ]FWQT  诫< Xu߄4*wqW< zq7qKO6X`yo彍- v>9@)]&s636kbWZrBQ`XEj Bҏ} f|>+vTSrH/as#_F?E%HF׉KntQSՃQ TJ\&G՝d^GJ%څ6 0%2OH.ȨX#MF1d8cĔ$ܲdpJHIT$)w(T{hK2HoCIM1p)1x,DH9*SD{Een7d Z7\xn2@ ba08ӠrLRPk3bRf'IX25Te Z^ am14DŽK)rMrǵȝAiH2N^IU(k-=R Ƚg3bV|8.GqXqD! _s0KCiT Aq ނL$BމkQ@HJAk)MLg ~ Hm[|Aש#Ҹq&VT> L@D>L(F;E#-Bo W2xkxHtr~,N3ǚߛUJ,s5{r沜#7,81Lvf,wq[5{s<Ϟb=Ą= X`ł$cK|Rc1R;rDa&sD(qzv/29 EkhyNQVK o2,qz06?`P'ΒygTТ  U'܉FQAAXĄXe gE|Abͼ5`#}.=,Kƨ'$^P7&DZ3Gzu'36{ObՊunJ&c { 9te"^7s='L$H32#^g;G2jʩ@LcXr06bupQ>~Fer̞|_斃Ô @̃MpĢ6Q:s Vr@V"5 ki@o`Y.͏^,\% de?-FfX/aw"3WW>;x.Cv2+i.Գ%<ގ44y:.=y*HMQSQH̚n]g{$cX&c4**rw$z8.<.Ծ-ID " K/}2:Pi0s642> k3X;yT@ XoWĴnBXmdLALEPms dsBF8ʤWOasyzo͑pTzpKNu2{fy_g|QpY}ն9 ryP8~>'9U<W$WR 'Y| <Ӱy<AJu ϰ}}PWvnS]RF'pUN۞\R PB(YS]s/6uܜQ((h܅- p O^L؏KA;B_A~,{{Q߱K.HbxsD?Ϙ(d+bBWCސ_>>q,ɩA}ГtDt&('%!l#9m;(!sJ Bu"ÒvaAgxU{.z1a;MaGwdZӇMfwT'I1*c9 I݊d{D@gcn(g;yw9 62dhji|W-ŪkVR=bEAv3JJ(-Tw \]~t\%3|sS7%Ҏb8booHFDM*IE85M:<N~C)h27j@ H W!Z={  V=(e/Q=CM!ۗu/0ĝWaꥐ;b,d$ ?d& ?Mչog_[&_OT< .`9[.K.bTsˣ_,b{ͱ ŵj}`s'ZK$cEe"x z`r5wo\{{صp.ytf<Ձw: ڥw;OJL;{svǞ48D'3$]GRrJ?Fs0oVg0̒ ȅV9d9XF/t_pr6Bs;~!~=ǂ^wϱDzwtsz0vu='_w &zZ?].eɋI;D b.vכnw~;m(Sr\"x#2)(  * =BVIr  *ŹIď^l4L\_ڿ4v//__L;=#/;S}~PtyHvN5JNvN5SukF=˂ AIgb\`8cP_ 5գEbմα3?\tJr>ܬ3Zדٻs|WfQ5E\ӇR ;߾)45m識1WoݛXxA@.nvOrN峽6't}fl_}&8s}^:YO5ü^íY &I.gX1Ze1 Xv!&y0 3_֌o^<#x[dj0B+,hh58~joW<;%k\KXZ/GNMG-_{8՗g~4wľJ*^=t\Ԭ 6D!ІWUoD9jni8"~>8y>+5diSN^E?-Z~_@؈hRZapjh/+n`Sܶ_ki װ|$Anހjװˆ3![_5̪a3OM}ddl`XO1y *6z yzۄD?ca|'?$|\ɻ{,s4j!c#v:[A|;|o>0ĶHRCY$֜ʛ1ڌ9KFlPy>6sdC8y%{wΒJq;ZRgyyL{@֏vu2EJ˵߷Sx{ֳrW)uO˫ZuuhEYv7GU+HlJ7Jjiw6_z ?ǹ蝛:D>AB)fqq(kqr{ΒNE{UؾWuL`ܢ;@).v]8K6ݙ{[h9xo,b`WdExBZC!gɎbSz%f6/#+qIi.7^k} `CԻ aުJkZʵqEig p{dϒcAur5RyHAU[­^?GNlǻ9!iW$-l: 0a{ybwYx!m$x4 EjQ0X:-%h C9ؽ }Fa8p -gkIh!?\}ػ^7vǽ&YBmTۇ)-n.ӳNDDGw|W;Љ)Vo!7hO?ntET &?/uc__<[̟;>̎*ۣ:%{ڨjn4.hc7&|8x tG7ѹf.9E:Rsۭӹ|F^FJRљs ,џ|}qpz)I)'Jwzx^';נxQ9JNwr (!ɵN5XNjwzT IGwz9V4V;6xKDz)L|_ 픲ܰ6 Q0^% ӃBU6hSVi{L A~_{bTO!&ݣQ4o!#a˃!teRj( 5ICbЋѐE< VtLfzl44 ѻ)y//:V\Fnj#chYIC +NDC*AhV7v9Y}i1/$ E 7 !/yB˥`q.J( ԣjo냣BߍQ=٫,, :N;7 PS;B:&䓣N82JbR*oU|4M1%!"S CSV3Cf9W?Eև FiS CP`U2$M!P,fPÍ2hAuKb/˃K!0)NO#Y3CP!{ J#'frV $SA&ZVbuN3CCfć g fלL t/q1LIyp1λo`xΏm+~;7Ζ|OAG*|ܧѷWyՃ6V= w3(&ew~it*pܳ[6VFí|=?xvշs%=+Q-g~QX_=r""S"V+Y7uTx9Qe"xg X: b*wşjUi}t_WYl!J՗s컜5(P\|ߖ)2No_R)pG'p-`|;a^% *XIHt@6)_j%% u NQr]h~-+0y%ܾ6nW}+37p6a~|Kʬ 0cW: p !SF DByo"*(餪!F+~~?b֙,,% i_3<\ \?Xz瀻 eʉd9 ӧ4F0]%/`: a&B9@QJbrLr+Pp܀yJQJـ kbx  ߉*,A0)&te.o4dGmg )IM&o G+}{l0[d9qsÌRXxT0CBh-{jŝ!T*@4b3gڍTúg .{F꫟3 <s̓<87w)(*X=Qj+I?qpn}-iRKj aM7=\&=3[ vIOBσxrs7+i2f^]F2OC %tDH\;:n=8a7>ހ(!E>i%:eRs.IiwfalNQN[VU.UmFMAoT|,xJŚ*sF7kV.qd'rYLfFy~(v`<76i(r0{kNe5J17ߟWwK7WW?NNpU'O痟_vA$|ȷ֜v,AE{ȫ>3^b4S7-%@ay3<ЪGǜK)f.T}-ީ1O[$Fǹ[?ђ,'C.g׍2eO "wdD_dµhx1,}߽'/NN`l=?Z\3_&͠3w8 n~@#b<1Q2;OkI@xAûo/pڽ.l>2'B lƌSgp ˸^X%m߼Oa['P{Ҭj(iP3РЛq(%=>; `t5\^EVFa~8AXb8 H0@ZCT-ՃBgqi'd-bx~CVЀ1 ҌA 
=Ƒ^sZk$x@ dbG.8AvΐN7Y/K`iCn4 3/@jMCQv*BZ/@٬1|=Rpz=~.ӁrdG{υ5#%JLE 0<_,R,Xk_W._S66{! 2?NA/KW-utY2o/o_}D*lmT ,75Ls*}1¬ttſA<@=ڨ(gV [7I|$re^ү|y:٦4lw#LoS _dZΓ0'a8Op^ ܴ?l$A須,2G=1ˌT[T#F;]NbYCZZnC/:˥ I(\b9 "vA+f_)1 l.֋0#Rs@i7{mrA28GE$NDhn.ZT?٠دJ ΢ژU i%#}\r}슙|ָHZ:]HTy!n 3ۅD{[HDIEZҢOe(FxbF5*@(r {;gF(Me@_TVuJ+"M{-FC8RGA$sS\h'" -Agw<:=̶5Vufcf=f)?G|*!{ V (;@ j90m  !׊Ψ+__4_rJE܍HJ!!Xќ^kqITp݉yZ~] p:|BpwW \g9U/.yM)ȣڼGyoɄH~5iݐ.逭5X*Wڹ-CF9fkڣ38YZp7w +Vc84Uj꾈qs,R;m6u?- 9_rdًw[N韂LI[h|1J/IDʳd*qei *uLeE=a m Q'79L0!r`gY=Z_4p_;Lan-Y&սOPa|t} J5-R |Tu~~c!8T}>+gT2(h&6.{=)][Z̜^ssw{:kwp/F>S)poMꏡ~< Sm%D\T8ґeʙ1|upYt<@5&u4AV(򫴷qR*;'ā6tz,ܽ880uVDU))h+ay5X?sLbb4p-BL8 0\8KAZF5h靕XLa:` AP*?Vk1#qtiC5h LYG.}4971vʊY:/ڻ;/ E Gii4(':/O2DaRf,bmR~,d[XY24WXn^L`!d_cL1,ګk0H%/q<~3ӝkףiʨ+H|I6 Ar{e3kmy&A-d$&W*Vpriv`4 ޏǏ&tCW' SO9wKx>UZ3S Be9UBxIw ]%+H Ú nvy_}_[;+lf[o#ƟosQɵǃW.TAH:8Zx:p+⩁V;Oiqj=W1̇Y 2kC_"zcuW !H1%~Dc@B9k2SjUEG)@'Jv79巾LQ)%ER9(ጉ֍k9c[Tp9p5zH*5zA:}FV2"l0L+"sJ %l25F XG(5۟: w9QPQSmtɪNQYf>vX*c5r_{܌\<_=G5>g#ٕzo7x8YCs8_sE6t r]Os]H/RFɿ?)/8$yeZtm^/o.umC$4KހŏR ݳhψ95F/PC1S7cOTOu8VCQp6K{Wgn*ƉСr*P+Ϊ<-ಒWq`ylp, ]pEmg$}$gx0]VRO,g1 ht:ĭ 3C飹̶F"쏘{UYӥ%pFV\TYAUQSMLe Ɩ.y-= s!@F=(WE"veP)ьXb>>J!⧄*x9{g}>VY,Ѓī1cL;Z7& HZwGB.; #_ͭ2K:kLjʝHnj1UrU(wk@7Hh1(V˾j5@Fx9IP(.zCp;@uh GhWGZBPїPC+1RG"zh0J<8P&$Su 8} v 3u 4]n.r}\Yn1Txf9&P<*SHRshsH$+вk^ͯ5 yEJWIOrчInD"wN}mZ#uهxd:`f7}/FB9%Zбb"SSA~YB1-d[a=/r"/rn*,C:E Oq eT1%J)k^GF~ބbEVחMe:$BYVLqͻKqI/ŮF/ *AKwJUt"(ZGջ+)QB>Kl}õI`X'e $Y $6jiSɷ UrӻkPrhvȉ](('js{i.itӂm9V !w^ל6GXƽ~nB-[h-/9}w4NHCI9m&m4f:Rј@ L."Cj%H ! 똴EL.+X𗺁̟DߌIxaw9&d E9 9>EYHCJ:MW 'Zi״VUzjwö[;^Ҹ{v5Lj- r$.06Xa)׆.)* P6R"F0{ GOQdKnSqdsrLV-g"p )3QAh)Lp%㗘-Ȣ JgR7f.ABk{1њբh%.^G`5 sSNBDbqi4)(!ZqAV6^ C)ͶpB ElqO2Ə\9NrFa&J9"`T%֑QF!0|aÄaBǷ$.=Y/N4mf%Rg7Wl@MDS8l@kj(ToJ &Phb axVΙhgPB(- RmIG}J-Y\߈ù;=[&ڵ@y/r}r3L*xZ*k.+D31pQuX\yZNN6VlI:-e <)Xi}_#vBiqz8jDAX #P%f)ELť^^*4K޲5/9y aV~mxHr#Yr4=H/om7xQU1#Pl'. &đ5?4oHɏx M{j3*^Q FI\Ht.)% /C AOք,19qQ{]1Żx?pgWxJ^?/hfT7&g ۟\>>-׽#'?M?섞{=N/av0{wc0ݞN Cϵ2l0$Ldo'O_#i~AA}RU_]u*wDF;%y)Bzy?pGl} z N$ 2JXCCCNH8 dd}N/X sr h`A˦|BcD,I՘Uh"ֈO4S(C72 T?&ius<_KA@0`*D~NimFXQ7Atu'O]25b]LrE};<)Ky0 fȭ=M7Ai_e'PN5-Y&/FJ<=9Ag/zh@~mFN}mܞ% iP-/2[AM-΃KNXE ޒT4o.2 QQLow7!DH NlՔ"n<}?m;u&]+"Ř)I%U&1된!pk"8)xn<%RuB}oۖ.c" L!R+|UD}Ѕ7@эJP˂9`1!q}7p`4[AsI##FxAs $)>M|mIca,'$ '79eNGcd^о^u:7fQ$s!sQgD8EͅQJ5r뫗+su/ HbE`0 $4AUB==+Rx7ﯷρZ1 -{]_K۱o'Q~1Q/|'lO =cL1CdoGq4UE4B38OK{A)rܚ17U>GF~;%j$F@쨗hE!X{w#2+3!QK">^/p輺.`^PPrJFtw}˜^;MӲki6.z}m?OZfۇf|UPuf/f(e'+eehd[ 7|仸WԄ3>_#2 hąK%,_rmZ^w)r\r4YIUYC%&oHOlPYC#{p:}ūٗ*L j6`Q#^D}>P! 
o"ȡxbHwwF-1TQbYii_J{MFNy10F 4gJ?!0S\endZs D|`IGnj+#1A=K*$;ٳ rISK [+^B} r*iŊWKs):R*sW=#I0tr&@Y!Y} ^MyPetc c>OSzJWQW~,R-0GN) kSP6@)b =hB h98 iy x&MBŒ8|SץvX0XI>\>]};L;rhc;2{Z:`2ēJ`)3Z4ux43!W+ lZC`rS7 #%t_6kԹ-?UTAVBZ9u^qG;#W6N%{x6x Z>G"wc~z XCJ ~$gNՀqE!@(QdQиU}7c"40:pLjwzWN$WʁjqT#:{vZbe#P)MzayF D(ʸrfeDb)VQQBTmDzb#4 uVD}֙杘MLH uBB[pr( N+LTґH"cDJieou!G྆[7܂gWJFMbdˍzB JEBjؿػm$W`\ rY$DJ'{ݓL~VM{ܟeTHBcpwq(nK+B2s Rh}q3&) hiP3YBHgy@b*BB,F[-W_f {M H(xs!?#\T.}7[ nZB8(TnCzM2妒DkYQNۯCJ[WCn6mv;TM0S= |\xwo~goc›-xBʭ!S 5پ`u?A݁j#@Y7Xd z|b2OY`+BZɔ4rl9/~)!P~Q,9͒A%?fO.яQyBϙq>u`/>jp'a#/?b9md쵗he~rłY<Ƃ-@<ݴP9KFH1!.r?Y8.cS¸@p6%̖ѸP5#R!`;dWиe.`NITuntS%i{֩CPۛQig$ SHj-) 4#$Z*dDZ0t5(N2n@Y1pK-dpRsC-/3rgs++ͫ.\V ʜV/dw@x5јGLse{̱NM=mއf6 *ed_w]>?3?NѸOwvy+]L-<߸?s7|&xܞiCkz/Zo^s'eÔ9c/ʿ:]{g˸AZ>u^1G%}ʳon~c AưE=MS TAMEuzf+݇DI* ϕ0@+(JC1RWV*jid>?#>8:!\RɪBɼ K&K 6gH 'K NEn,KZD%0q63OC*@ ʸIQ9c+ʙuSư)E^ xQ{J^I%U,/6=G yp# iJŬAPo_516nG-LGa ܱur0J 1k&A> Q ɛL!6 \2ւuv˲6J{$tPRo p872%HNdtorU+e9JtҢZ|C|z[b4APz{pɵ: z LI.nY G~)d\nrrTh5D|LR\"(rt_G`Sv(kZjcw }p"{q^~xʸEz*$`QUdv3߼d8iPAQYY8*|l w/x8ÿQ1"EߡWA¿hv`/ȇjbm?.`6못,\/|I`pALPhc17+sc|9~{fL{QE ]M}qϻgּͣdF0R_gxQ6M{btlmJg؝"l]*![qanMܛY ߇_XyL+3A*5$S3R`ɳo2$FD.Nxu7@ nGTWBEj@0QYk鴆VS^ӪTy^r-g8xyK-ΑĄ. ķF"I[7"DZBT5dB#BuHmЊ}kDS|=p-u@Irl.#RP,>:̝2ΕJg:qY97mF;u܈asJ`DeHH1"-*n [J)j.'{vxTܯ3`cٞd2:yWB&Ҟa 1N!aLBcOT܄j,6)UO}М{]T"- !oɤ)5îέ$O̊O3lEo W7\5/0ۇFtieK-քp8E(x%WZ%(@ ,0E>ُZ-oɟo'[lmaeBAF[;oY6xߔMnm{[j}BA̟7ײ кK7yyo}wW]_yw}tM|UAqO5'<)))l%@,/ Rw쇢;9 iu!׮!uᱎc. }c֐Rk eSp֗ݙ 64-{H$~-ۺz?뙓n>sۓsaU*בcQ*7+oߗ6AR7}M4b9%XZ&uh\FMOep (Uo̿g[6E^ scxaYf2 @yXά[J3Zg )\U R+ 6)*[V[FHj /{pQja:$VO d*y֜MvM4Fb(nG9D.,?0 qUVX'b )4u;JEZ6ZܓIx݀{eyĒj9줰Zz%ŧ[-ҳᴃ Ob DpA ,]X7n 0"uρCoZgsMْ%tqLfШ$:<Fc{4͋{\D3/QR#Q5]@¨ku?^~?SUS-VZJ4ׁS_hM[jI)i$utRkظS{J_)$" F$ I'#ﯿY%o5pT! wZdV݀ ]-A-X2yȯ7 -]*A8 茧Tp3` v_:q0@wwId{thMVEOEF@@Je$^鼨:$&BjΚMkP8ھo!C"Kٷp1& @uxBޣma]PMԔs.}5 m[X_ځ: l*\жn;hu&[̖ 1E[vX-xpɸmެ,ݪfv뜦* tK,*"R66*g̝nZ珔ukh|g0#fnsaHfhGQ I ׉\j|t alw?TiA*kƷC!Ds%rߔ [P [́총\fE_ȘK8 ̘$*9Ǐֹc16ޜ bcyb'rcVK!X7y_6`n|[eRj)7/˄T*+pҝRJD]PmxSajPK;%r <뜱MgrH0C\16RZǚi#{Q$&Nz;yٰӘ {NSypm'Z S5>YO~vM$]ɋǧ|D#FzDzf2:wV?-⏅Ԧg'^kBbR-F,oه۹o {7HqgSAl>\k<ӓX%fW[DE'>9 ֪ZY'W`!pNg;<Уn9X~1wlygǰ}OeU#ꋣtW՛OeUg~x7֓\ᘘp>n[O0>+r[מs2s^;ymړAivEpq;Q|D&°4m ,TֈqFbm˄_59܁mp"L42f]\y%.BWNqgYEɻ΃Fa.n,]Bc $pN|ȝI'|bbDPF>oNBn(/E*7ei 3A3OOc+Z /k'J( ϯASSHk9a|O9ahџw煳U7fT4zٚ[VRКW}]3C.`|=twތL6{;sc$j /!19wVϙ;v"FMSp ,`o @RR5\wmmu4ƭ]:{N\%[, .ݎDnƐ7i3CR2;x_w/htWIW#!x9Co-evo뻐<WKw͟ $.tç9(1(׮kzc'>bcNH7 à8ד3 z S!Aˬ|5Q.Z4&Oȍ:'Ot fR(J%*Ğ5A#&J8 ד҈ LC<s|iNIMCNmF'!؂ZUMnJC8l0aeM^Y 8>x󻶴Mܽ2/csi5)-">SXH(X0&N4΢L$V:6 hF8"RZH43UN4 Hͷ!{`XrSN8]3-?UGAN)Fw\~nyVώT(ʘRǏwv؉,pbzvuwuu$a$=[sɵUkǓ7I# *) r!qmpB'in9nt8rgnH!GJA)muL%l5JDzb¿LX]th ccYb]b!'kзlPM4{]q1_8Dg"3U"o' %,K)D2ƃ G 5-}"id{;%yv},zvSh'XF\i9`z+~Lz5YhxJefoa9Zq 1X@рd1*@l Ȣs`fk/ܕ3Bʥ%}mW1ɋ@ d_ƅ-13/ 3#%7xR4B^9GX&ʹh,fGW d*h]$ ]@i3GЀ|o72*<Ҟ#',gX.,x@ٳ=  VS`r$P^3#B-H3;GB4xi<64|հ*9x]ۧ'q%ՇY<'RB#g!3d7=SWZ忚G"n=k.Cm &T''HwcT3c"X5p`4y n/MnUiԌ ނTy9ۤ^SB,*pL1#mBaɷg>]ؒ} k~g:H'I 1r&'B' 3Rȇ%f%bs"'r(JxF*{Z<啩 6`'1XċFX̾D $ 9hDм0iB범`Ad*\*I6lKL tZ8/9 H))rel 7|v#Eh&h*I=OE5Eͨ#ڊЙ\F$YQ'q::A! 
Gڅfnd{ɟI-Y+ْ w?+iyv\0'$ ~Gnl}?/SCG-⬙'nzњU>5Lx74ʔxq8K帾{pr?or5K|Tb2auX;׸agofOҴ L|z76G\2٪gzM(Z$+z}{}=me mN^%t7r8\sOӫ/ It\Fb(%la٧&0_1RÞƲ+VgJ>w&Ԫo-vbF\D mLMژ#[]\žp0Σh0aTN&<^˅3ezy -ɡG P,P,P,P@U^"DRTDEft (Fӷ*`Q"DL1vkہZ彶|g&+3G`ٚS#0Z'wrfjK ̹6ݼ!R?">GSι0V;,2PZVp* 'c% =!G jD)_blA|BRr6WA)jY{RƶV.7t|̅&Х/\v1rnzuVzN~;^}5ֳ~,N dLUX뎰aZ2i6aeGah-͗`^#i;)9j85@kX7 eRYq6d(=Jl7=3>$=ۙj[dQXei|{x#|ݗRYWV:!YYBDžu, 7ƚ~|Ѧzqk/E\}OAL(1C$#]$pFJY_g]J-w* ͳ-#[{I0FW~=>4&JOϫ_30Ej}F^]c4F&Wl@σ2 w)X{nGY~91BGr{m}چz>81n651m%#/,9Vs^/VCxtytytyt\j\'VlN%RÝ(ȡκkqj3kuA`Gp`Rv\amZ-.?'xT²R-sNy4YVnoLf%EŻ^OϼENН}>=5B_ޮjr2fg`-XT~]\SHvLS\(8qȂ4sJ>`iyMzسs:ګ›qOzsI)gDq7΢-xJr0 ~?? }gͼ8c{T[邑jtx){6p| ooϪ|A6=qwgopo.&śo2j}jO}X3~ru@ J劔ʈNFK2xh 0:A/+"crxm +oԁ?n fU-ea(@6(yG}Xj*IŠnQ+$EaN8էRY@,oEw5+Z$k :d:7Z k#s827șA(/dKR ):Rl!=#esj )=Qn2|O;h'c' y(ƹL/kd?888* acqIYEEP)ͺ}0t 8r-kgUWf^)GdVer۠Ji ei- NQz=ÿ!JVVj\zk -MZ^V-ez )Тrg+"V(HxId^ wע^b#|.rP[,rUo=B@ ~멶Nyz~^=ևUϙTDž>_. hR[GHOQ$5)#f:{U͊4Gl|ǖV.ܻ,v7 dۋnFoeG nY|F`rE|n%zum'+_T*5m-n\i'>L+4vx~v5|X]Jn4 .Z} BXu}{|Yiz(o/ځ/+~yo_|{J2bZxj(P&sai-)z7OzZ{١^Y9m!i\L#e5 sq B1(^ Pk 4/hۛԁj] N엖4W.zNw=ѰWJ85(x\Gk(=҂W疱uĘ*!P2/A,C Wŀ!+*?7bY>Qktͅy ¯ NeVh8Y!+:} I{w2ĻAkM0ơ\ "/ sZ[b(ǀsV*γWÞȢ=OE-cri.G.OR?X֞:Hߋr+lq_12KCz]hL/S(Ro9vRK}e;V;Dv\*$"<ZC['\NW]E+.G`C.TчjCIq .8E/5 URIzN:\Zk7U֊:6wj0l|yVn3;83ڬO &R'7i|UMX?aӇan ,M8ٴ~}dQ1E*sf~KSH{A؋H/s"d,!bJLn3a@FE~yכ-U Ђ҂ 9(RN~j3o. 6GEr)YK]U) Q9&qjrr XPT?[s>x:&eЀB@,_Q7yє V8dy+J8TJ\B:l:1%HBKfP %@tva'i$&Iu1M=(S*;0DV`aSy1%>g`dgjs S*F@x}qAd:X5HhQo tڦOy (颼Qctnu?XKُLZ=j~sBbEr>4Sd{tȵV8:Fִ6o6|yM/6HѼ,R%uմ_,K ǬN.Ulx8w9wH&TrLR'D$uӎ!&yFXT8KKr1 *qYXT"xuZ $$NALtJ"—5EEX{ѐjjZ]7Ob 9Ɯd@&"&KP>Xڍ[867嫛sw;Z,_||M]_GJ+8|Ovh8xTܿNF 6(G3 -wKN H. 9ڟ0l\UrMPo ͠^YB,!Ga8OGu@B '+zʒ6CD X<-ҽ]'*PQҝ(]\U|v.])To]A@y nEvݭ)0He蚧VX.ٵu4%Lvv{ßo={O]T3}\_/wreTl|Vzݺ_NEӇL;[8w*!싅bXοNݢ}-]ъQig7 JwW.[ܹ@nت  ||j/uq΢Q<%;;M#kzNbq:k4n MVɴ[m@,Sp/[,BX'v&tUOSi ڭ Y44-[lv1-.wvTk-L,*M;}QJ3zk]og{J%ACimiJ%$~C~p)N[/[,BX'v&֝Њ=K n]p΢Q<{ M[,BX'v&^ дVзvݺEJk[e6@ 6# w~1ipCN&d&l`2cT=] oV\L/Y隣0L系kY\?8AfﯮxdK_{qz@Cij:3oo]g(EuήA돆rtIƿ^35?5Y<יٓ~7sFts޶ya9{XX8[.gﮯ!Z]Jz[`Yϟ 辧/F˹+lZffjA25 ys~W6ef˚%3A6ӄL0jQe׺o^ox/WV'P*ivh8ĝRvzs3*Ϋ9e).JvWcƇ`(u6MmQs[?<}m+1Q^ͽʿ~oYb1Wy,1DѵV툒$l2Q$yK' !Im[kyۄrҭB\R:U^BR:U^|$U@N:Z.tNlR}><õ~МZ|a'F{z"WJ@PO> t৚BsJ;Me N&4\v21? JuV7%`L/->mkF*^'Y@I+>)*t:IE!noTĀ 2hZdpz~(8[/ݳY4"u7{hGןv_UGq'm+AT;j4ˌd63$'Xcny^֙x܏'C#'k'AKiDTx3sX tz [(CvpINU#𣶪ARPgDs1Li2 jpP H#$Nd"S"NG䋝5P  ;i2c K558;ƆǺ*ˁ2d.f= '3爕Ts"˩iW*"%!sc˭FP80G=XtrC-Ë2# {8mIgط.8tϾ}D[cS:IѤ`%7Rg]D ھlidRsȹ*"b#PRp>;s7~\g#JØL| M`sӧGSRDz)5)扨y?(5/"OQQJLi0zAk!ɨx,MFHIX0Εk\l9:s߾ݖΐĈ"&vM4_g%^,"雊KgKSbjgX?T#I@ݗ)~t_K8_Q(VZ.cGĔt#{KPP0ѯ9Ĕ[L?y!h(S9$QJ^HоK((A(H2]A\L 2fJ"c9ArarVrlj.6k/d#KT!~ܻO>œV&c<3T̐Ђ:iKohӱ"X#pIMİӑɷp 5~H ^n$|VtNGש.e&L6^~Q ݷظ^WPuGO^7Ō. /ms̭. 
% t^D{>r UV ˲oiI"VеN!xŭZoI(xx؛d U> #5k-@l#MΤ}ZrfiGMSs.1nrZAݗ} 4^ğOd|D+a\p䛈+Dz_'6wt;8(,`=H$%kq_ֺ-Fb!]gs1ILJgV|XcCVCp‰@dpO{ɉ/kN iUVm:?iojg{n!bN&᪙|T?WV1VWy*W jNc"+b󑷭~'ʍ!mgC c,04C8%%Jۇ^ #tj,zX}\0R 7#[fuZL,R#jb&޴X揣#L$RH-OkKl\lZ˞kB]Ÿ}Eֈ"ݍ"(hDMCK&^ D5LH̡fRva[OB'ȕF &>%Z&Tv`еpMBpDO=%Mջ{5bI;xq DFeL!x&+51IRdEIR]^vpIk-n#r6e,c~Ax(\$=o5n?ϢDNim$.LἉg&L=,f&H0ݯlMd'DYH6WUMok(jEVL5BVCV: a.yYIZbaˢ' 6Oǩ== RQȸ,s>}(.&2e5S 7X!j.1`$|b4CkHI 8RyvYO`G2S),9xL1W LEC<_+u~*Ѫ'^Aa[!d#5Ot8'"hB{<넩xlr)Ԋٮ6.XZ?DKTMyʅxdQ&% Z$i 7J!t,nkZ7oZkxe0[4~ӚJ\:9Ld\,?'8)vhiϝsB̾om-:$e } 5qPB6eIJG u5SQq1.4L$tUu[ZOF"$>Ndڂ0š0K%m\0"h*pLqzeQZJAGX7ȸj|e6ʑw憨D3/^vc"]I;D`(z3]J=b JJw<]I]i.)%+UlyJy+Ц-l?;) CK}5B ebԴ8֥LU$A }:PcbN0Ti7>խ9XlXKBrא-}'+K_2p9+jDUKd‚I 'w%"ŚbZx 8_7Ju7VHyz_"D4H$3\DƜyZXl+*ܲpX۟}t<) #sͲuG&bmq07M^8$>CQwbV;HZ]]HRa^ IJ#f+E 8IHF+dѸ| Q,E$JT-0 <"s.~48#0m')!1_T(.* co `(.?sG7_3Ww(8WYyUN"7.)åx{9O\at^{w<E:;Y6Xي,oFKbm.TZcCBo(HYT`433!xr6Z Kx.b!teUW$E~6c|04R``~os#|<6kHyGdb [UQdE2*U7[Uև*xO[]`s^MC.4"1Mt)s2|5z37}5T L9|yZoS o:p*IoS+~( \~E/ia^p].0<ۄa[p]uyuke #h:\ #W#S55#}Bo2]+\xŋiыWۿ ga8zx1|勍PA Q8_8)!q1nln> puZ2}{SS~ya&~JsfH37RYAc-S% QrVJ(rЂpn|]{5b]0j+#L83.2O56Q\ID14 !2esArpyLh}ybwË^hTe_}f=K `\uR̄q3(ZnYFȢN`F/Rt\ެd:A$74zT(I#{y CS oSRZ |,SCf v寙 :NwQ'G?Fmƃ =v!y-^KzVXKk7g"`nAO!V \#xXE}`sj;o5{9ckwjvb?wt\u6N; dğ^h&ya(4?9}Q#ӚLN1J xrPc%J'CCc0JiTcdE ЕA3m #E}0«vq6FY6&E㳓e &؆>3ޏxSWDd#3p |{mF?y?}wkכ6ShQwlgwދ^ޞCg?ݗo^ްz럯{o7ڛ`7h1οQ^i|Wܝv;T{Fv?_΋GfQ8:}ί"mAŤ./?4w4jF9fL(y3t'tAb>Wh0p;39xfF?TϦ'YxTcsw0:>1nzac[j{z1*^'|n6cS,V \dT>%#?R`mZk6;8x`FΕs#ܘu W)sn,?-pz I?J7ir2fR5`fTIi̹KA:d"`C!g?6EE !qiR91LطVOk|OVO^zZzZzg,nzԘ3SEW@aHf( pGГzVOvT͔&eͤ90iHa(';B]43*tQ)Tbc%R"[|jUa$hu[|_/_W޾M,*2ȴ,caM%ƒ k'e{jȺ_r%#_I HBI_MT3Ȓ%߸əX>dғRz^1=Yq&`V3%GqGƨNx-<`ZXJ@Ewmͺ/>Ĭ_-`m<; sCީx}J+'[t((chO1!t2c1N|kO|^5[C5[C5XݾFav>˔S+18 D \= xě@xH"O)5`-&Z@Q`D'96c$|.iq`'8k"}[G^BBM-\;g!xfGHaMpP`BXX)ÊZʟ`VD_t/^\`|vQ!([d!eֆFDsw S v0c,:rc01.eGSlj~.U]-IS0痾PN [ov{~z>B 35s? ngj_a%0*}H{u\>][x])J~Hf8Z]Ejt7ht7_/KSV?(Wnz~čw7X+a勷_ҧ{k̻Ưk\e _[/FoCV`/"j__4g`rƝMTKbcYd jpfE8o.-Z-^]tHR}Yl1S4,[}@c,NJ+y0֮iQ i>+0`.%^~x6y[=3v?ZG'wQU,jﺪ`c[\Tl]T,cS  6RMy&7^)1;SU 3n}զR29Rx-$P+kfMd /%ཱ[O܉3ZsyPtWB$)4%l!\ڲ>MUfD*+q;nC唤ߒQ)ǘi$^:KEr9oBEWtKc(Y/L] }5-ݦx5YgpIƙK$^B! Wɂ R utG5)h82MI`R&>yRqAF"hTRNµsp HQDta2ȄlSւ:0#\dxQ$tz'}&]DqfA%LJ&\'G[]F}~rzʝj?ŴQR+oQu;D6ʎ!S["ymxlEN5q"im HK33WzjYI7y(-JwqJFX/)Q7Ig[L2 ݮ_꬙7Jڪ4JjmYӵekD V? k,j*O*OާMܵ[hs΁n[t7)B@!2IA˘2-r\0]l!}ɲ-9'? xN?K,u_?x ]bhtDͶ鱻:ciBsC0T.c0"{ӹɌIjG+<0\@!E^bkO ݢSg@V裨0`u,%{+7 >ЪpQ%.LFA(%IkF""pFL:*ID:m5$pOt,iÐ0#g!rȊѠY[Ck՝=A^7Ϛ9b^`V֢ͨUD\JU 2\UxL#J8IC)5Q:)"b D;ri @FuI8}wۂIU6@&< 3hEysp"cA F)bĹ#piwLU;h)$mҜV1i\&at>{蹳ZL!en)U)IK[ELzĒAD,%]$"A^%tAD F<#ZarQ9uCcRU0st]M7vFtgF$/0JJɒI~av |HU|m 1"6uÈւ& ]PTN@3Saف6ʷOD- ǘ(߽[5~Pt>ďើvܬklUAOt 2b=dƲ?bvֹ-f<8^+qjcC+ܡaI/`D1V0h8JDRѭ݉cп1iZʃvcgQ gݓw@!m 6E-V?(D@z=>ɣ٧)1)(vd7%rnG15z&Be.Hi5v^>aS4Cr \!FyX1Ġb)(ۍ/:<0ЉZBZs;#Q|o_Ԃ! 
q=Wh>W'/(0`]ʱ$cDw.@Qɥ5l߼[EDyܻ уb[k6j{(^q]a ώ˕ꆌ̨ \Qbv22jzk1< k lq5 wrj,G18!՘ǕSS{2pV7~E-2h~v/zI5D9زye@I1$q;+@8?>қjzBT3ޕi8;V2WL )ϝ|4H6;BwFWi{:{֙])aFIdV~RƐ0<UG}!ayrXp@Tk6|jki%W`{;_h-ىv4CP CY^ ߏ{AAȻC3U_S6Cu p|G=-^V- _CE/mUޔ*oJ7ʛ*/ y]YR@#TN(V:e)GpYQJ(*zF*;NٯӍJF) !)6M^Xk~\&=G|]7_L|z|Ғ1:5gJSV$T>+T.:{f# s13MQqr.@7!ql.aɘ\ZU[y0i^Ņ}XNLwo$=Mp% C%Okcgr4d/TTt,}>D4xIdt'dKhY h IbQDˣ QJp&hںhL'ᰖ܋l)jP@Svc@S*M&h!"|2xY(.2 ~, ) ؟R }Żt"1Lw`@| Xh "BK6x\]r9[ a}Y?`L8j.!*˷Փ/@X ٫ZO,žBc=gRk@``ꚒQJ j \ּžTUiGE_}~{_h&E3rC5߾(o+lb ?Ic̟z k!\WPe%&A^c$xWrWN ڋ)|]|__hwwC{Α(Շ}qEvj1}qqpbvt4 mT ƾ}7 #TCamdbp^/##R/*$2z=cEOzV"> VG5N@7 (D睉R:p,!r)BI1t!^CcXh8 i$)'X |^_2$aV @jw8p]u߾}ѷ/z?}ssM=$'*Ni;:CB䚯{K77 Y+Ex4O3pTVeA%0Qdqj4AO%ޥb2kXbe<`Ҙ̊ 7t;Kk jnd&<3P p"!Q)NjI0JX5^`PAP)0Z£YəGA^afyu8ܥ?^ܧo~ݔC rȯş!& tq{Sp͂W7ȕN4ϯ7f$>7{bO'-soWvJ qe*)M<4:D4ȣnOd p](gՈ aXW27!/I Z貧cf#Lc;:~svxq -*Z%S`,$rfL*8S%rie"EqI%=)DžWCg!CS R?}V&jD  lЀhw3^[8tLp:r3ID,me5mۀg>u,籦^a X_cuG==ScbiHjkJ)(tֹ&-|]khs6t/dKq̇bKq,@tc|ȉkY&'E(ϼK9__w՛Ӽɣ}G]sܗV +;W%iUT 5qKVQ"\LpkpZ=^]еgv{ AGXWI'kG5rcɉ\ǔ>,5^avrYP5c2n93= dD:cJg/ys-36-(&VS /,9I =+ KzNն\8.?{XF![dg)wW7~)*Ω̰vܵhr}Σ:J*i6>+kG뫁r,A 84]:t&Cn>J.ǜ/QPQޔTb*)Ơc9΄7f<좜pw̯5$PDʆ>t̠C$H)5~ʩp(>(힭Q9g%^υR?yA}=$dq1*7`84&N҂-x& fTGxmlw DNvbt>I~NYxF3.VF,r(a' (lj =aF&-HPAǰUM 67 (|ls kauHjM@F3Svw$OlDJ7).![Nd1!^-$uMK}7[&3jc^q=꥓0£* gc_@ ¸ipr#CrRfJZb)tXUZN@IʵA;骞/5:Ӏߧ(q%PNu\.>ƅgZNQ-2UO6 RQL0xrKuvR|Aګ%童WYPmPQ1zGCnt:`!n<#rSo7n_~>7Qrk- q^P׮j%/ygzF2M}y{._Gzun/nxcsw}w%NIU^s C)O_jKo]g92ADNtwl|h˼zvPA+mH,&<"/;;cx3knIF:(5.Um$%X_DdYq*Z\K Q=+Ke]D4X-dzVu%X3C~FmczLZ_rqqnb6=eZzˆٍ#J8;n:`;Gyʹ'0EkkR:WA2G*2ĴUFUؿ_(="2;ap+e()x/ ftCn 1YglJ >pUDnm`a/~;͐]u< ] }SaXs(i(X,9@=xq:^>'B qN ~,uVPNT䉮dbQDW1wmp=I0ʹ\ AY$e^6Mry7Z&]uC)V!*WVHeK5K$YU Ԕ̊ZjQŇde1>;/d%P { ")O) IB@\\@2T}h)W=Y"z5J:rE򹥇e>앯(_3C4RdX~kLA Zt^U2v?7D3b6[ncIލ7;nU/s6hѭ)9^]ppfs2Gdy(8Flmxi5ot- {C^dZerhr(xR+25 W$^V+,P 5юk!.dw2h/ZOJc6.pM'Ϟ \ӳ9KjHk7MfJ薔vƒ(SB{f{~}@wi\ Ee嘇Pirߋ"\L Bfvi,01"3:z#8qi/]A;#鈞+Z j=t6nroo䩦MO4doٴxhCeo෯Sh*/ X{k8 /Hi 9(ozswJlw_8Uo&:زKMC}-\#'.WpY% ]R/״|߶ʨ>KҾh0QJdP [fwkLWxm9s=k6m`T^)MCfoWP΁߇xCՄ;okD.YVTQiUL|t}}Ӱ,';)ѬT0|7`fT'[ 6}tfc1:\x Q(tszUB2#w b&BP2]kܝG6m&wսn_[L'ht]fs(|I˻Pr:h nUݍ6)S:A;bb&{U$R`?lYnhrDbsUp8KV3ꢝ .UMOEugړKЦ>~D;J 5iZo,\Px yH@t:R',r 6qUI8/ 44W^xBrg2Xed:sY ~>:],.G#68v;jt8xlHWӊ'%-n~*g^K>Zذieq?ڃN9z(%̙CĥWGx׈{VVc,s]UGn9,0]mzqsus$昖rwl'$?i>=+\'՚Pgk5Eҡѭ ceTrAm;a3? Lqܯ'FIred$鄞glF(F7J7/EIG4mvzH竬TqU62+#T{ave3` 5YoR `8I:.͵K{욢F/MQm |ϐٱӯoXt9 A)wJ)tR-MTK:77يNߤkd;^(Sn\fP|A@&.'yi֋\vW@Ѳ4q0v\Ok%H_W;٘M*mXJܢ'Cl- "MxEɚ(|4։, ۚf5ڋ; O3@pU˹ܽk,8xYSx> NfKC7Wэ`oXa/dK^Hʴ4koޗ-IiCQWr^K@%=C0^4W{>0c]DNu+J~^d}:LT– (V*"d8)byDV:g# dX5~!#BWn~ok/0ʘJYTT(bsf=$uij3hɁ2c,7ks7.<[| !{:K Lb)HǍ5i*B(sq@= 0f쭛kB P52E=StFGѷ}qi')޸fl  N@e\] iԘ*~z"hi[у%Us߆nJ8W-`IL׸<$l@V@=4); hԆ\s~00u: x]ix?ѫtUsl$,ܤ,X癩tPdzP/#hŎTЇѾc Ogg'(^ծ[êZđ.2iN jdDu |zPߝ xd)*y+CŘQ) Fݲ}ٲL~~Q[Yc.ŲPp{C MZGQEvfeƋrC0]}e#$CWgޖŲ̰AɣѱYbiZJsQjf02?Ԉ73XY6Y^(c6@Ph1!$}xV@_(B!XFbHBBƅV[r^O9-*SK=5c^zgJÚ4oP QH .+YXU.{ϗyj ۧ*B_:g~/pJD< -]/Xo}kyj N]'}6h|,',,/wC%ZEYdדMZT )J0Wg D-y!n{Kyxݻ[)lJ)2~*@=ZvRpH#XSul0S#o~^"jvO'1̂li>kE:jE֢GrVi#4`2)&U5,AG)m72 (;t-`gߧI:0Alǡ>i§V=X`d|i\POolT T/ hH*nbl@ #!6{?vr: J7,ĺ%T=B?eysVNkZf>7." h SO4: u1GURy2`|b5$;ZT!YYPǍmIp~!. 
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[881094905]: ---"Objects listed" error: 11482ms (06:47:38.325)
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[881094905]: [11.482616049s] [11.482616049s] END
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.325420 4729 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.326035 4729 trace.go:236] Trace[358858575]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 06:47:25.719) (total time: 12606ms):
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[358858575]: ---"Objects listed" error: 12606ms (06:47:38.325)
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[358858575]: [12.606445417s] [12.606445417s] END
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.326085 4729 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.326197 4729 trace.go:236] Trace[546094204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 06:47:24.427) (total time: 13898ms):
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[546094204]: ---"Objects listed" error: 13898ms (06:47:38.326)
Jan 27 06:47:38 crc kubenswrapper[4729]: Trace[546094204]: [13.898153233s] [13.898153233s] END
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.326235 4729 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.326205 4729 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.353637 4729 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.380547 4729 csr.go:261] certificate signing request csr-ljnqq is approved, waiting to be issued
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387284 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33358->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387375 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33358->192.168.126.11:17697: read: connection reset by peer"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387795 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387822 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33364->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387857 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33364->192.168.126.11:17697: read: connection reset by peer"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.387851 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.395303 4729 csr.go:257] certificate signing request csr-ljnqq is issued
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.531594 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.532176 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.533631 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066" exitCode=255
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.533683 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066"}
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.533755 4729 scope.go:117] "RemoveContainer" containerID="f0c9bacdcbbbd1f3d15b0ae9edf9cf7fcd68d42856765d2fdcb2f2feed6fd83b"
Jan 27 06:47:38 crc kubenswrapper[4729]: I0127 06:47:38.551565 4729 scope.go:117] "RemoveContainer" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066"
Jan 27 06:47:38 crc kubenswrapper[4729]: E0127 06:47:38.552051 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.176839 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.178224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.178260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.178270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.178401 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.189803 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.198062 4729 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.198327 4729 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.199708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.199755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.199769 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.199791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.199804 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?, CSINode is not yet initialized]"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.208474 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.215131 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.215910 4729 apiserver.go:52] "Watching apiserver" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.218259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.218293 4729 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.218303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.218322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.218332 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.221429 4729 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.221805 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.222195 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.222333 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.222601 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.222698 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.222809 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.222920 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.223308 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.223433 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.223564 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.227793 4729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.228799 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.229903 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230243 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230404 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230514 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230595 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230727 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230681 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.230903 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231131 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231208 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231347 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231425 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231496 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231578 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231647 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231719 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231789 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231865 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231938 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231192 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231287 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231477 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231561 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231644 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231690 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232101 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231715 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231831 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231872 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.231986 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232008 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232221 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232233 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232263 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232283 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232281 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232333 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232359 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232425 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232444 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232460 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232479 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232496 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232514 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232536 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232557 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232333 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232488 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232500 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232654 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232661 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232663 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.232766 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:39.732745502 +0000 UTC m=+24.799866765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232782 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232787 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232802 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232807 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232824 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232852 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232870 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232888 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232905 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232924 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232965 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232970 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.232983 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233006 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233019 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233023 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233112 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233149 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233152 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233170 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233194 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233215 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233234 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233251 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233270 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233289 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233305 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233322 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233338 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233356 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233403 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233425 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233449 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233471 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233494 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233515 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233532 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233546 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233580 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233599 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233615 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233635 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233651 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233668 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233686 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233705 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233726 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233746 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233764 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233781 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233800 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233845 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233862 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233879 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233901 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233920 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233936 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233954 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233971 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233989 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234007 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234025 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234042 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234098 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234118 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234135 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234154 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234170 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234185 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234218 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234235 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234251 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234269 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234285 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234301 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234337 4729 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234356 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234374 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234392 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234410 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234428 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234450 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234470 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234487 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234504 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234525 
4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234545 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234597 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234618 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234635 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234654 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234670 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234687 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234704 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 
06:47:39.234719 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234736 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234772 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234789 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234807 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234824 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234839 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234855 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234872 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.234890 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234906 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234924 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234939 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234956 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234975 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234992 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235011 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235027 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235043 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.235063 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235494 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235514 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235531 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235550 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235566 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235581 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235597 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235617 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235633 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235651 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235668 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235685 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235702 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235731 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235753 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235773 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235796 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235814 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235830 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235847 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235868 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235885 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235905 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235927 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235944 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235962 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235979 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235995 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236051 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236086 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236117 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236143 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236161 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236178 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236459 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236533 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236552 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236572 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236591 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236607 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236626 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236646 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236666 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236684 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236702 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236721 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236740 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236758 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 
06:47:39.236775 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236794 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236834 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236861 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236886 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236903 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236939 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236959 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236998 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237017 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237036 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237085 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237106 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237168 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237180 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237192 4729 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237204 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237214 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237225 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237234 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237245 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237255 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237266 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237275 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237287 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237296 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237306 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237317 4729 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237328 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237339 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237349 4729 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237360 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237370 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237401 4729 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237414 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237428 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237440 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237456 4729 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233214 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233265 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233450 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233601 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.233608 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234013 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234020 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234198 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234213 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234412 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234593 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234606 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234769 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234862 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234932 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.234982 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235121 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235170 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235322 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235345 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235372 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235591 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235890 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.235896 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236059 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236317 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236350 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236622 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236645 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236703 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.236819 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237124 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237236 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237544 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237808 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237824 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.238168 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.237979 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.238662 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.238781 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.238791 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.241171 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.242488 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.243172 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.243518 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.243745 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.244126 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.244322 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.244645 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.244854 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.245297 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.245358 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.245595 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246198 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246339 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246397 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246527 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246792 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.246870 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247013 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247035 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247038 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247153 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247643 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247702 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:40:18.181384916 +0000 UTC Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247856 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.247872 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.248502 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.248684 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.248752 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.248888 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.249207 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.269086 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.269634 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.269935 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.270167 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.270228 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.271610 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.271756 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.273310 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.273456 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.275148 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.277011 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.278021 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.278758 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.279333 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.280383 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.281427 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.281769 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.282859 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.283116 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.283430 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.283494 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.284054 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.284237 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.284404 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.284809 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.286735 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.286752 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.287268 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.287318 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288229 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288447 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288492 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288596 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288633 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288825 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.288940 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.289107 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.289116 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.289451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.289325 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.291152 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.291466 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.291744 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.291917 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.292048 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.292136 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.292530 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.292798 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.292806 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.293177 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.293491 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.293651 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.293862 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.294050 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.294636 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.295033 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.295242 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.295840 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296012 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296347 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296352 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296413 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296683 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296930 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.296969 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.297221 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.297231 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.297463 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.297696 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.297793 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:39.797767581 +0000 UTC m=+24.864888844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.297901 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.297959 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:39.797949427 +0000 UTC m=+24.865070690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.297980 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.298267 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.298353 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.298629 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.298985 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.299269 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.299847 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.300581 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.300907 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.301428 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.302404 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.314742 4729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.318010 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.320628 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321000 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321407 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321595 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321783 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321901 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.321991 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322045 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.299927 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322088 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322157 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322221 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322291 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322351 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322486 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.322541 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322696 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.322831 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.323041 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.323174 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.323879 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.327755 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.327857 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.328760 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.331234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.332453 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.332568 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.336501 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.336795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.338831 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.338921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.338979 4729 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.338996 4729 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339022 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339045 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339055 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339064 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339090 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339100 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339110 4729 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339119 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.339130 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339139 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339162 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339173 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339182 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339190 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339200 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339208 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339217 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339239 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339248 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339257 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339265 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath 
\"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339274 4729 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339282 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339290 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339312 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339323 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339334 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339343 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339353 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339361 4729 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339370 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339393 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339402 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339410 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.339420 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339429 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339439 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339448 4729 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339472 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339482 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339492 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339502 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339511 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339521 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339545 4729 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339554 4729 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339563 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.339572 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339583 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339592 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339600 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339622 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339631 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339642 4729 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339650 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339659 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339668 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339676 4729 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339700 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339709 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node 
\"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339718 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339726 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339738 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339747 4729 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339756 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339778 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339787 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339796 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339805 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339814 4729 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339823 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339831 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339853 4729 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339862 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339873 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339882 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339891 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339901 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339909 4729 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339933 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339941 4729 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339950 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339959 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339968 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339977 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.339986 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340007 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340017 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340025 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340034 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340043 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340051 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340059 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340097 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340107 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340116 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340126 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340134 4729 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340143 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 
06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340155 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340178 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340187 4729 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340197 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340206 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340215 4729 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340224 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340233 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340256 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340266 4729 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340277 4729 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340286 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340295 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340304 4729 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340313 4729 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340336 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340345 4729 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340354 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340363 4729 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340371 4729 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340380 4729 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340389 4729 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340417 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340427 4729 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340435 4729 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340444 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340452 4729 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340461 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340469 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340505 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340513 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340523 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340532 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340542 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340550 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340574 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340583 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340593 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340602 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340610 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340618 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340627 4729 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340651 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340660 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340670 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340678 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340687 4729 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340695 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340704 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340712 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340734 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340743 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340751 4729 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340760 4729 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340769 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340779 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340803 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340813 4729 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340821 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340830 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340838 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340848 4729 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340856 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340864 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340887 4729 reconciler_common.go:293] 
"Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340896 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340932 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.340992 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.344196 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.349895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.349933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.349943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.349957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.349969 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.359011 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.359052 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.359071 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.359175 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:39.859150758 +0000 UTC m=+24.926272021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.371230 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.372659 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.372700 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.372714 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.372783 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:39.872756728 +0000 UTC m=+24.939877991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.378625 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.388875 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.391705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.392897 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.399161 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 06:42:38 +0000 UTC, rotation deadline is 2026-11-18 00:35:28.406751172 +0000 UTC Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.399219 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7073h47m49.007534597s for next certificate rotation Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.401686 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.425437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.425475 4729 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.425485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.425505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.425516 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.441695 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.441728 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.456632 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.457723 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.467664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.467705 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.467715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.467732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.467746 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.487884 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.492448 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"0
1d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.492574 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.494041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.494061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.494099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.494116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.494125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.509254 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.528581 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.537455 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.540052 4729 scope.go:117] "RemoveContainer" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.540233 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.548625 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.550787 4729 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.565603 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0c9bacdcbbbd1f3d15b0ae9edf9cf7fcd68d42856765d2fdcb2f2feed6fd83b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:21Z\\\",\\\"message\\\":\\\"W0127 06:47:20.724792 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
06:47:20.725763 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769496440 cert, and key in /tmp/serving-cert-4066132986/serving-signer.crt, /tmp/serving-cert-4066132986/serving-signer.key\\\\nI0127 06:47:21.054403 1 observer_polling.go:159] Starting file observer\\\\nW0127 06:47:21.057477 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 06:47:21.057714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:21.058823 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4066132986/tls.crt::/tmp/serving-cert-4066132986/tls.key\\\\\\\"\\\\nF0127 06:47:21.272934 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.576729 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.583171 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.595876 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:39 crc kubenswrapper[4729]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 27 06:47:39 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:39 crc kubenswrapper[4729]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 27 06:47:39 crc kubenswrapper[4729]: source /etc/kubernetes/apiserver-url.env Jan 27 06:47:39 crc kubenswrapper[4729]: else Jan 27 06:47:39 crc kubenswrapper[4729]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 27 06:47:39 crc kubenswrapper[4729]: exit 1 Jan 27 06:47:39 crc kubenswrapper[4729]: fi Jan 27 06:47:39 crc kubenswrapper[4729]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 27 06:47:39 crc kubenswrapper[4729]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:39 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.596019 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.596093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.596110 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.596131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.596144 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.597091 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.599344 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.611793 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.623935 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.632181 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.637997 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: W0127 06:47:39.643381 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-47a0f775ae6b802192a2e863cf7a79c86ff13daa8d9606a21398575c9937b5cf WatchSource:0}: Error finding container 47a0f775ae6b802192a2e863cf7a79c86ff13daa8d9606a21398575c9937b5cf: Status 404 returned error can't find the container with id 47a0f775ae6b802192a2e863cf7a79c86ff13daa8d9606a21398575c9937b5cf Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.646925 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:39 crc kubenswrapper[4729]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 06:47:39 crc kubenswrapper[4729]: if [[ -f "/env/_master" ]]; then Jan 27 06:47:39 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:39 crc kubenswrapper[4729]: source "/env/_master" Jan 27 06:47:39 crc kubenswrapper[4729]: set +o allexport Jan 27 06:47:39 crc kubenswrapper[4729]: fi Jan 27 06:47:39 crc kubenswrapper[4729]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 27 06:47:39 crc kubenswrapper[4729]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 27 06:47:39 crc kubenswrapper[4729]: ho_enable="--enable-hybrid-overlay" Jan 27 06:47:39 crc kubenswrapper[4729]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 27 06:47:39 crc kubenswrapper[4729]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 27 06:47:39 crc kubenswrapper[4729]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 27 06:47:39 crc kubenswrapper[4729]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 06:47:39 crc kubenswrapper[4729]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 27 06:47:39 crc kubenswrapper[4729]: --webhook-host=127.0.0.1 \ Jan 27 06:47:39 crc kubenswrapper[4729]: --webhook-port=9743 \ Jan 27 06:47:39 crc kubenswrapper[4729]: ${ho_enable} \ Jan 27 06:47:39 crc kubenswrapper[4729]: --enable-interconnect \ Jan 27 06:47:39 crc kubenswrapper[4729]: --disable-approver \ Jan 27 06:47:39 crc kubenswrapper[4729]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 27 06:47:39 crc kubenswrapper[4729]: --wait-for-kubernetes-api=200s \ Jan 27 06:47:39 crc kubenswrapper[4729]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 27 06:47:39 crc kubenswrapper[4729]: --loglevel="${LOGLEVEL}" Jan 27 06:47:39 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 27 06:47:39 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.649186 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:39 crc kubenswrapper[4729]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 06:47:39 crc kubenswrapper[4729]: if [[ -f "/env/_master" ]]; then Jan 27 06:47:39 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:39 crc kubenswrapper[4729]: source "/env/_master" Jan 27 06:47:39 crc kubenswrapper[4729]: set +o allexport Jan 27 06:47:39 crc kubenswrapper[4729]: fi Jan 27 06:47:39 crc kubenswrapper[4729]: Jan 27 06:47:39 crc kubenswrapper[4729]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 27 06:47:39 crc kubenswrapper[4729]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 06:47:39 crc kubenswrapper[4729]: --disable-webhook \ Jan 27 06:47:39 crc kubenswrapper[4729]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 27 06:47:39 crc kubenswrapper[4729]: --loglevel="${LOGLEVEL}" Jan 27 06:47:39 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:39 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.650904 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.651996 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.661414 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: W0127 06:47:39.663862 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c66f44c647caff574b5bbf5f424cada7f1eeac815bc6ad58f6a02793c40a68f7 WatchSource:0}: Error finding container c66f44c647caff574b5bbf5f424cada7f1eeac815bc6ad58f6a02793c40a68f7: Status 404 returned error can't find the container with id c66f44c647caff574b5bbf5f424cada7f1eeac815bc6ad58f6a02793c40a68f7 Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.666769 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.668286 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.686898 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699064 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699088 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699120 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.699837 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.744516 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.744646 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:40.744623639 +0000 UTC m=+25.811744902 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.761555 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-b6f5d"] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.762142 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.768297 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.768518 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.772364 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.777244 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.786001 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801419 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.801605 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.812749 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.832171 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.841938 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.845622 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-hosts-file\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.845664 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.845700 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.845730 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjkt\" (UniqueName: \"kubernetes.io/projected/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-kube-api-access-mpjkt\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.845798 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.845865 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:47:40.845846337 +0000 UTC m=+25.912967600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.845892 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.846019 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:40.845994722 +0000 UTC m=+25.913115985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.855982 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.874456 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc 
kubenswrapper[4729]: I0127 06:47:39.888716 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.903530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.903559 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.903568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.903583 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.903593 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.946461 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.946522 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-hosts-file\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.946584 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.946608 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjkt\" (UniqueName: \"kubernetes.io/projected/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-kube-api-access-mpjkt\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.946704 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-hosts-file\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946762 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946796 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946810 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946876 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:40.946853518 +0000 UTC m=+26.013974781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946938 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946948 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946959 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: E0127 06:47:39.946986 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:40.946979652 +0000 UTC m=+26.014100915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:39 crc kubenswrapper[4729]: I0127 06:47:39.963414 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjkt\" (UniqueName: \"kubernetes.io/projected/9f924f11-f70f-436a-a7e5-fb7d0feeabc4-kube-api-access-mpjkt\") pod \"node-resolver-b6f5d\" (UID: \"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\") " pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.005978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.006018 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.006027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.006047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.006059 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.074528 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-b6f5d" Jan 27 06:47:40 crc kubenswrapper[4729]: W0127 06:47:40.085639 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f924f11_f70f_436a_a7e5_fb7d0feeabc4.slice/crio-df3492bbba334b360ca1c8b373b2936bd311c18ee8de1324511950934b3968df WatchSource:0}: Error finding container df3492bbba334b360ca1c8b373b2936bd311c18ee8de1324511950934b3968df: Status 404 returned error can't find the container with id df3492bbba334b360ca1c8b373b2936bd311c18ee8de1324511950934b3968df Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.087767 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 27 06:47:40 crc kubenswrapper[4729]: set -uo pipefail Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 27 06:47:40 crc kubenswrapper[4729]: HOSTS_FILE="/etc/hosts" Jan 27 06:47:40 crc kubenswrapper[4729]: TEMP_FILE="/etc/hosts.tmp" Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Make a temporary file with the old hosts file's attributes. Jan 27 06:47:40 crc kubenswrapper[4729]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 27 06:47:40 crc kubenswrapper[4729]: echo "Failed to preserve hosts file. Exiting." Jan 27 06:47:40 crc kubenswrapper[4729]: exit 1 Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: while true; do Jan 27 06:47:40 crc kubenswrapper[4729]: declare -A svc_ips Jan 27 06:47:40 crc kubenswrapper[4729]: for svc in "${services[@]}"; do Jan 27 06:47:40 crc kubenswrapper[4729]: # Fetch service IP from cluster dns if present. We make several tries Jan 27 06:47:40 crc kubenswrapper[4729]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 27 06:47:40 crc kubenswrapper[4729]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 27 06:47:40 crc kubenswrapper[4729]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 27 06:47:40 crc kubenswrapper[4729]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 27 06:47:40 crc kubenswrapper[4729]: for i in ${!cmds[*]} Jan 27 06:47:40 crc kubenswrapper[4729]: do Jan 27 06:47:40 crc kubenswrapper[4729]: ips=($(eval "${cmds[i]}")) Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: svc_ips["${svc}"]="${ips[@]}" Jan 27 06:47:40 crc kubenswrapper[4729]: break Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Update /etc/hosts only if we get valid service IPs Jan 27 06:47:40 crc kubenswrapper[4729]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 27 06:47:40 crc kubenswrapper[4729]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 27 06:47:40 crc kubenswrapper[4729]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 27 06:47:40 crc kubenswrapper[4729]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: continue Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Append resolver entries for services Jan 27 06:47:40 crc kubenswrapper[4729]: rc=0 Jan 27 06:47:40 crc kubenswrapper[4729]: for svc in "${!svc_ips[@]}"; do Jan 27 06:47:40 crc kubenswrapper[4729]: for ip in ${svc_ips[${svc}]}; do Jan 27 06:47:40 crc kubenswrapper[4729]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ $rc -ne 0 ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: continue Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 27 06:47:40 crc kubenswrapper[4729]: # Replace /etc/hosts with our modified version if needed Jan 27 06:47:40 crc kubenswrapper[4729]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 27 06:47:40 crc kubenswrapper[4729]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: unset svc_ips Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpjkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-b6f5d_openshift-dns(9f924f11-f70f-436a-a7e5-fb7d0feeabc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.090325 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-b6f5d" podUID="9f924f11-f70f-436a-a7e5-fb7d0feeabc4" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.108185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.108221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.108230 4729 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.108245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.108257 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.146333 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cmwl2"] Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.146941 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.149208 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.149459 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.150028 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.150374 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5x25t"] Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.150708 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-45zq7"] Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.151138 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.151429 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159159 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159219 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159164 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159470 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159596 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159881 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.159922 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.160115 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.160630 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.162946 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.174626 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.185214 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.194784 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc 
kubenswrapper[4729]: I0127 06:47:40.207869 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\
":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.210327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.210372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.210394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.210415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.210427 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.218942 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.230465 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.243085 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249104 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-system-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249146 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-hostroot\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249166 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-os-release\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249185 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-695bm\" (UniqueName: \"kubernetes.io/projected/526865eb-4ab7-486d-925d-6b4583d6b86f-kube-api-access-695bm\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249213 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-multus-certs\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249254 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-multus\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249318 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-daemon-config\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249380 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-binary-copy\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249402 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249431 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m7b\" (UniqueName: \"kubernetes.io/projected/15e81784-44b6-45c7-a893-4b38366a1b5e-kube-api-access-c4m7b\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249456 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-k8s-cni-cncf-io\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249475 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249502 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526865eb-4ab7-486d-925d-6b4583d6b86f-mcd-auth-proxy-config\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249526 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-os-release\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249545 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-kubelet\") pod \"multus-45zq7\" (UID: 
\"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249567 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-socket-dir-parent\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249634 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-cnibin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249653 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-netns\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249676 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526865eb-4ab7-486d-925d-6b4583d6b86f-proxy-tls\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249692 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-system-cni-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249751 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj47b\" (UniqueName: \"kubernetes.io/projected/3b8949c5-4022-49a3-af0d-2580921d3b18-kube-api-access-nj47b\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249788 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-bin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-conf-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249844 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-cnibin\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249875 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/526865eb-4ab7-486d-925d-6b4583d6b86f-rootfs\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249896 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249917 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-cni-binary-copy\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.249934 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-etc-kubernetes\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.262613 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.280215 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.282939 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:13:03.636884218 +0000 UTC Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.295631 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313237 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313309 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.313556 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.338243 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-cnibin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350891 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-netns\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350915 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526865eb-4ab7-486d-925d-6b4583d6b86f-proxy-tls\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350934 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-system-cni-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj47b\" (UniqueName: \"kubernetes.io/projected/3b8949c5-4022-49a3-af0d-2580921d3b18-kube-api-access-nj47b\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350973 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-bin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.350995 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-conf-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351012 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-cnibin\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351062 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/526865eb-4ab7-486d-925d-6b4583d6b86f-rootfs\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351102 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351105 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-conf-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-system-cni-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351179 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-cnibin\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351132 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/526865eb-4ab7-486d-925d-6b4583d6b86f-rootfs\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-cni-binary-copy\") pod 
\"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351093 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-bin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351051 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-netns\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351231 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-etc-kubernetes\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351262 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-system-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351279 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-hostroot\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351297 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-os-release\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351310 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-etc-kubernetes\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-695bm\" (UniqueName: \"kubernetes.io/projected/526865eb-4ab7-486d-925d-6b4583d6b86f-kube-api-access-695bm\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351359 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351385 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-multus-certs\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-multus\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351436 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-daemon-config\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-os-release\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351460 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-binary-copy\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351471 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-system-cni-dir\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351508 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351535 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4m7b\" (UniqueName: \"kubernetes.io/projected/15e81784-44b6-45c7-a893-4b38366a1b5e-kube-api-access-c4m7b\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351544 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-cni-multus\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351553 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-k8s-cni-cncf-io\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351570 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351593 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526865eb-4ab7-486d-925d-6b4583d6b86f-mcd-auth-proxy-config\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351610 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-os-release\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351630 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-kubelet\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351656 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-socket-dir-parent\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351734 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-socket-dir-parent\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351573 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-multus-certs\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351823 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-cni-binary-copy\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.351418 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-hostroot\") pod \"multus-45zq7\" (UID: 
\"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352107 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-binary-copy\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352148 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-run-k8s-cni-cncf-io\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352200 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-os-release\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352225 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-host-var-lib-kubelet\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352315 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b8949c5-4022-49a3-af0d-2580921d3b18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352436 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15e81784-44b6-45c7-a893-4b38366a1b5e-multus-daemon-config\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352454 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b8949c5-4022-49a3-af0d-2580921d3b18-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352566 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526865eb-4ab7-486d-925d-6b4583d6b86f-mcd-auth-proxy-config\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.352621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15e81784-44b6-45c7-a893-4b38366a1b5e-cnibin\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc 
kubenswrapper[4729]: I0127 06:47:40.354896 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526865eb-4ab7-486d-925d-6b4583d6b86f-proxy-tls\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.355796 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.366474 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.366994 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.367576 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-695bm\" (UniqueName: \"kubernetes.io/projected/526865eb-4ab7-486d-925d-6b4583d6b86f-kube-api-access-695bm\") pod \"machine-config-daemon-5x25t\" (UID: \"526865eb-4ab7-486d-925d-6b4583d6b86f\") " pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.368243 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.368919 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.368923 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m7b\" (UniqueName: \"kubernetes.io/projected/15e81784-44b6-45c7-a893-4b38366a1b5e-kube-api-access-c4m7b\") pod \"multus-45zq7\" (UID: \"15e81784-44b6-45c7-a893-4b38366a1b5e\") " pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.369519 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.369902 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.370238 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj47b\" (UniqueName: \"kubernetes.io/projected/3b8949c5-4022-49a3-af0d-2580921d3b18-kube-api-access-nj47b\") pod \"multus-additional-cni-plugins-cmwl2\" (UID: \"3b8949c5-4022-49a3-af0d-2580921d3b18\") " pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.370574 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.371157 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.372060 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.372713 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.373639 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.374122 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.375128 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.375609 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.376118 4729 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.376971 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.377537 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.378507 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.378863 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.379400 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.380763 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.381645 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.381670 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.382632 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.383248 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.384120 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.384794 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.385520 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.386310 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.386934 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.389502 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.390003 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 
06:47:40.391197 4729 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.391254 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.391331 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.393539 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.394656 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.395117 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.397580 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.398146 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.398479 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.399020 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.399689 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.400399 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.400965 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.401613 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.403822 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.404457 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.405335 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.405874 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.406801 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.407547 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.408400 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.408632 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.408969 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.409466 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 
06:47:40.410353 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.410931 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.412053 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.416687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.416713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.416723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.416739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.416751 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.428119 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.439655 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.451867 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.461015 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.468342 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-45zq7" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.477139 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj47b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-cmwl2_openshift-multus(3b8949c5-4022-49a3-af0d-2580921d3b18): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.477174 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.478812 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" podUID="3b8949c5-4022-49a3-af0d-2580921d3b18" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.482911 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 27 06:47:40 crc kubenswrapper[4729]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 27 06:47:40 crc kubenswrapper[4729]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4m7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-45zq7_openshift-multus(15e81784-44b6-45c7-a893-4b38366a1b5e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.484802 4729 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-45zq7" podUID="15e81784-44b6-45c7-a893-4b38366a1b5e" Jan 27 06:47:40 crc kubenswrapper[4729]: W0127 06:47:40.487681 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod526865eb_4ab7_486d_925d_6b4583d6b86f.slice/crio-d695f69d24e5d6a08cd55a4b5c5da547a0596ddc083f118ea3c2feb6035e61cb WatchSource:0}: Error finding container d695f69d24e5d6a08cd55a4b5c5da547a0596ddc083f118ea3c2feb6035e61cb: Status 404 returned error can't find the container with id d695f69d24e5d6a08cd55a4b5c5da547a0596ddc083f118ea3c2feb6035e61cb Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.490656 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-695bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5x25t_openshift-machine-config-operator(526865eb-4ab7-486d-925d-6b4583d6b86f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.492659 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 
--config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-695bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5x25t_openshift-machine-config-operator(526865eb-4ab7-486d-925d-6b4583d6b86f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.493931 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.515973 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95wgz"] Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.516777 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.519213 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: W0127 06:47:40.519242 4729 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.519257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.519272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.519275 4729 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.519294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.519307 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.520587 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: W0127 06:47:40.520623 4729 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.520663 4729 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.521502 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.522118 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.522252 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:47:40 crc kubenswrapper[4729]: W0127 06:47:40.523322 4729 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.523350 4729 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.534544 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.546130 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"d695f69d24e5d6a08cd55a4b5c5da547a0596ddc083f118ea3c2feb6035e61cb"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.547819 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 
-3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-695bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5x25t_openshift-machine-config-operator(526865eb-4ab7-486d-925d-6b4583d6b86f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.549797 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47a0f775ae6b802192a2e863cf7a79c86ff13daa8d9606a21398575c9937b5cf"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.551211 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-695bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5x25t_openshift-machine-config-operator(526865eb-4ab7-486d-925d-6b4583d6b86f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.552652 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.552680 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.552950 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553043 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553097 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553133 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553165 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553205 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553349 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.553372 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ -f "/env/_master" ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:40 crc kubenswrapper[4729]: source "/env/_master" Jan 27 06:47:40 crc kubenswrapper[4729]: set +o allexport Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 27 06:47:40 crc kubenswrapper[4729]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 27 06:47:40 crc kubenswrapper[4729]: ho_enable="--enable-hybrid-overlay" Jan 27 06:47:40 crc kubenswrapper[4729]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 27 06:47:40 crc kubenswrapper[4729]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 27 06:47:40 crc kubenswrapper[4729]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 27 06:47:40 crc kubenswrapper[4729]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 06:47:40 crc kubenswrapper[4729]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 27 06:47:40 crc kubenswrapper[4729]: --webhook-host=127.0.0.1 \ Jan 27 06:47:40 crc kubenswrapper[4729]: --webhook-port=9743 \ Jan 27 06:47:40 crc kubenswrapper[4729]: ${ho_enable} \ Jan 27 06:47:40 crc kubenswrapper[4729]: --enable-interconnect \ Jan 27 06:47:40 crc kubenswrapper[4729]: --disable-approver \ Jan 27 06:47:40 crc kubenswrapper[4729]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 27 06:47:40 crc kubenswrapper[4729]: --wait-for-kubernetes-api=200s \ Jan 27 06:47:40 crc kubenswrapper[4729]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 27 06:47:40 crc kubenswrapper[4729]: --loglevel="${LOGLEVEL}" Jan 27 06:47:40 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553398 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553473 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553554 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553587 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553671 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553839 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"348b5dc62f1804d855af4eb2731bf51025bada673261a34382c6ba91ece8ede8"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config\") pod \"ovnkube-node-95wgz\" 
(UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553916 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.553951 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfn5\" (UniqueName: \"kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.554677 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.554678 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b6f5d" event={"ID":"9f924f11-f70f-436a-a7e5-fb7d0feeabc4","Type":"ContainerStarted","Data":"df3492bbba334b360ca1c8b373b2936bd311c18ee8de1324511950934b3968df"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.555499 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ -f "/env/_master" ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:40 crc kubenswrapper[4729]: source "/env/_master" Jan 27 06:47:40 crc kubenswrapper[4729]: set +o allexport Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 27 06:47:40 crc kubenswrapper[4729]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 06:47:40 crc kubenswrapper[4729]: --disable-webhook \ Jan 27 06:47:40 crc kubenswrapper[4729]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 27 06:47:40 crc kubenswrapper[4729]: --loglevel="${LOGLEVEL}" Jan 27 06:47:40 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.555674 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.557580 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c66f44c647caff574b5bbf5f424cada7f1eeac815bc6ad58f6a02793c40a68f7"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.557653 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container 
&Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 27 06:47:40 crc kubenswrapper[4729]: set -uo pipefail Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 27 06:47:40 crc kubenswrapper[4729]: HOSTS_FILE="/etc/hosts" Jan 27 06:47:40 crc kubenswrapper[4729]: TEMP_FILE="/etc/hosts.tmp" Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Make a temporary file with the old hosts file's attributes. Jan 27 06:47:40 crc kubenswrapper[4729]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 27 06:47:40 crc kubenswrapper[4729]: echo "Failed to preserve hosts file. Exiting." Jan 27 06:47:40 crc kubenswrapper[4729]: exit 1 Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: while true; do Jan 27 06:47:40 crc kubenswrapper[4729]: declare -A svc_ips Jan 27 06:47:40 crc kubenswrapper[4729]: for svc in "${services[@]}"; do Jan 27 06:47:40 crc kubenswrapper[4729]: # Fetch service IP from cluster dns if present. We make several tries Jan 27 06:47:40 crc kubenswrapper[4729]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 27 06:47:40 crc kubenswrapper[4729]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 27 06:47:40 crc kubenswrapper[4729]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 27 06:47:40 crc kubenswrapper[4729]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 06:47:40 crc kubenswrapper[4729]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 27 06:47:40 crc kubenswrapper[4729]: for i in ${!cmds[*]} Jan 27 06:47:40 crc kubenswrapper[4729]: do Jan 27 06:47:40 crc kubenswrapper[4729]: ips=($(eval "${cmds[i]}")) Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: svc_ips["${svc}"]="${ips[@]}" Jan 27 06:47:40 crc kubenswrapper[4729]: break Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Update /etc/hosts only if we get valid service IPs Jan 27 06:47:40 crc kubenswrapper[4729]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 27 06:47:40 crc kubenswrapper[4729]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 27 06:47:40 crc kubenswrapper[4729]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 27 06:47:40 crc kubenswrapper[4729]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: continue Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # Append resolver entries for services Jan 27 06:47:40 crc kubenswrapper[4729]: rc=0 Jan 27 06:47:40 crc kubenswrapper[4729]: for svc in "${!svc_ips[@]}"; do Jan 27 06:47:40 crc kubenswrapper[4729]: for ip in ${svc_ips[${svc}]}; do Jan 27 06:47:40 crc kubenswrapper[4729]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ $rc -ne 0 ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: continue Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: Jan 27 06:47:40 crc kubenswrapper[4729]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 27 06:47:40 crc kubenswrapper[4729]: # Replace /etc/hosts with our modified version if needed Jan 27 06:47:40 crc kubenswrapper[4729]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 27 06:47:40 crc kubenswrapper[4729]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: sleep 60 & wait Jan 27 06:47:40 crc kubenswrapper[4729]: unset svc_ips Jan 27 06:47:40 crc kubenswrapper[4729]: done Jan 27 06:47:40 crc kubenswrapper[4729]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpjkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-b6f5d_openshift-dns(9f924f11-f70f-436a-a7e5-fb7d0feeabc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc 
kubenswrapper[4729]: E0127 06:47:40.557741 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.558740 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-b6f5d" podUID="9f924f11-f70f-436a-a7e5-fb7d0feeabc4" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.558905 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 27 06:47:40 crc kubenswrapper[4729]: set -o allexport Jan 27 06:47:40 crc kubenswrapper[4729]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 27 06:47:40 crc kubenswrapper[4729]: source /etc/kubernetes/apiserver-url.env Jan 27 06:47:40 crc kubenswrapper[4729]: else Jan 27 06:47:40 crc kubenswrapper[4729]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 27 06:47:40 crc kubenswrapper[4729]: exit 1 Jan 27 06:47:40 crc kubenswrapper[4729]: fi Jan 27 06:47:40 crc kubenswrapper[4729]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 27 06:47:40 crc kubenswrapper[4729]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.559980 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.560492 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.560867 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerStarted","Data":"f11accaaf2be35533f889c6934647bdfac8e07a1a42d072eec01bc7bdf13dce9"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.561719 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.561961 4729 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 06:47:40 crc kubenswrapper[4729]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 27 06:47:40 crc kubenswrapper[4729]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 27 06:47:40 crc kubenswrapper[4729]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4m7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-45zq7_openshift-multus(15e81784-44b6-45c7-a893-4b38366a1b5e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 06:47:40 crc kubenswrapper[4729]: > logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: 
I0127 06:47:40.562915 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerStarted","Data":"e1c22514a153adc14a40f7251f2be6fb79ae7cd761ab9f9653f67026b2d791b0"} Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.563212 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-45zq7" podUID="15e81784-44b6-45c7-a893-4b38366a1b5e" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.565143 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj47b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-cmwl2_openshift-multus(3b8949c5-4022-49a3-af0d-2580921d3b18): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.566230 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" podUID="3b8949c5-4022-49a3-af0d-2580921d3b18" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.566405 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.575544 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.583986 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.592634 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.603116 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.611464 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.621838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.621868 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.621876 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.621894 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.621904 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.622632 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.642926 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656289 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656327 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656440 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656506 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656528 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656572 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656591 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656610 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656664 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656740 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656759 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656779 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656797 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfn5\" (UniqueName: \"kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656817 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656847 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656873 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.656894 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657056 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657117 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657149 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657400 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657436 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657442 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657457 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657464 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657478 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657527 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657559 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657590 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657726 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.657794 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.658057 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.658254 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.658368 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.658612 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.668180 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.675719 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfn5\" (UniqueName: \"kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.678841 4729 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.693103 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.705777 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.715015 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.725043 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.725108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.725120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.725137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.725147 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.728824 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.738958 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.758156 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.758328 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:42.758303112 +0000 UTC m=+27.825424375 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.767392 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.805912 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.827574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.827613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.827624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.827639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.827651 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.847051 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.859769 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.859816 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.859977 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.859993 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.860066 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:42.860041766 +0000 UTC m=+27.927163069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.860126 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:42.860115799 +0000 UTC m=+27.927237062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.886342 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.926830 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.929497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.929599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.929615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.929635 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.929647 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.960941 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.961004 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961208 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961231 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961235 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961317 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961247 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961336 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961398 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:42.961376317 +0000 UTC m=+28.028497590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:40 crc kubenswrapper[4729]: E0127 06:47:40.961420 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:42.961411118 +0000 UTC m=+28.028532401 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:40 crc kubenswrapper[4729]: I0127 06:47:40.970582 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.007846 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.032879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.032942 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.032955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.032978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.032994 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.056944 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.096549 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.135885 4729 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.135941 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.135956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.135979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.135993 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.238659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.238697 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.238706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.238724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.238734 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.283726 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:40:58.640441321 +0000 UTC Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.341487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.341539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.341551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.341574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.341587 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.362224 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:41 crc kubenswrapper[4729]: E0127 06:47:41.362381 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.362460 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:41 crc kubenswrapper[4729]: E0127 06:47:41.362514 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.362560 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:41 crc kubenswrapper[4729]: E0127 06:47:41.362614 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.445063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.445135 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.445147 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.445168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.445183 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.548718 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.548777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.548792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.548812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.548827 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.600969 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-czw8g"] Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.601473 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.603824 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.604108 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.604655 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.609705 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.611423 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.618455 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.618988 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.628196 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.640798 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.651672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.651717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.651726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.651744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.651754 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.655186 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.658324 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.660990 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert\") pod \"ovnkube-node-95wgz\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.668043 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-host\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.668123 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczgv\" (UniqueName: \"kubernetes.io/projected/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-kube-api-access-bczgv\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.668276 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-serviceca\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.673045 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.683242 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.698408 
4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.709982 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.719765 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.728052 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.737644 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.746227 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.754130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.754176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.754203 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.754222 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.754233 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.755754 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.764019 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.769441 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bczgv\" (UniqueName: \"kubernetes.io/projected/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-kube-api-access-bczgv\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.769491 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-serviceca\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.769549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-host\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.769594 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-host\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.770684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-serviceca\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.794196 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bczgv\" (UniqueName: \"kubernetes.io/projected/0f0a2a1b-0118-4509-b664-8bf0c6b22cf6-kube-api-access-bczgv\") pod \"node-ca-czw8g\" (UID: \"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\") " pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.857776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.857825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.857835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.857855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.857866 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.916209 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-czw8g" Jan 27 06:47:41 crc kubenswrapper[4729]: W0127 06:47:41.928591 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0a2a1b_0118_4509_b664_8bf0c6b22cf6.slice/crio-a044f4248a1c6e74ccafa315c967ff0488a9268f25cdd2ba4e509f65909dcc46 WatchSource:0}: Error finding container a044f4248a1c6e74ccafa315c967ff0488a9268f25cdd2ba4e509f65909dcc46: Status 404 returned error can't find the container with id a044f4248a1c6e74ccafa315c967ff0488a9268f25cdd2ba4e509f65909dcc46 Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.963720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.963764 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.963774 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.963788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4729]: I0127 06:47:41.963801 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.070093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.070146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.070159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.070181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.070194 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.107861 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.112177 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:42 crc kubenswrapper[4729]: W0127 06:47:42.122608 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4dbf50d_949f_4203_873a_7ced1d5a5015.slice/crio-67fa9c5098bd53dc85612cba4b388aa52b6b6572fc25b5fbeca9e7f8a6adc162 WatchSource:0}: Error finding container 67fa9c5098bd53dc85612cba4b388aa52b6b6572fc25b5fbeca9e7f8a6adc162: Status 404 returned error can't find the container with id 67fa9c5098bd53dc85612cba4b388aa52b6b6572fc25b5fbeca9e7f8a6adc162 Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.173409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.173439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.173449 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.173490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.173501 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.275494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.275526 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.275534 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.275550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.275559 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.284049 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:46:37.233957341 +0000 UTC Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.377772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.377809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.377820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.377838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.377848 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.480626 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.480666 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.480677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.480696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.480709 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.575929 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-czw8g" event={"ID":"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6","Type":"ContainerStarted","Data":"85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.575996 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-czw8g" event={"ID":"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6","Type":"ContainerStarted","Data":"a044f4248a1c6e74ccafa315c967ff0488a9268f25cdd2ba4e509f65909dcc46"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.580384 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" exitCode=0 Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.580438 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.580461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"67fa9c5098bd53dc85612cba4b388aa52b6b6572fc25b5fbeca9e7f8a6adc162"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.583280 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.583316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.583329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.583345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.583357 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.592427 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.602117 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.615381 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.630476 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.645634 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.662254 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.678716 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.686428 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.686471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.686481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.686501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.686514 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.698241 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.725355 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.736949 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.756544 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.770261 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.778846 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.778984 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:46.778958691 +0000 UTC m=+31.846079954 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.783296 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.788756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.788808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.788821 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.788840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.788852 4729 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.798373 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.810110 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.833134 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.846520 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.863438 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.874974 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.879898 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.879952 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.880135 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.880213 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:46.88019343 +0000 UTC m=+31.947314703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.880316 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.880437 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:46.880416587 +0000 UTC m=+31.947537850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891179 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891492 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.891578 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.902166 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.918240 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.935305 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.945353 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.958313 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.981259 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.981311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981464 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981491 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:42 crc 
kubenswrapper[4729]: E0127 06:47:42.981507 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981556 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:46.981539962 +0000 UTC m=+32.048661235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981614 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981631 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981640 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:42 crc kubenswrapper[4729]: E0127 06:47:42.981671 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:46.981661335 +0000 UTC m=+32.048782608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.983850 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.993559 
4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.993588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.993597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.993616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.993627 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4729]: I0127 06:47:42.995727 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.007886 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.095721 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.095763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.095773 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.095789 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.095800 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.198225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.198355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.198419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.198517 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.198620 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.284317 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:56:55.913738319 +0000 UTC Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.301183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.301354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.301433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.301509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.301572 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.362122 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.362262 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:43 crc kubenswrapper[4729]: E0127 06:47:43.362482 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.362579 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:43 crc kubenswrapper[4729]: E0127 06:47:43.362701 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:43 crc kubenswrapper[4729]: E0127 06:47:43.363039 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.404100 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.404149 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.404161 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.404179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.404190 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.506851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.506895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.506909 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.506930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.506944 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586418 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586478 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586498 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586518 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586540 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.586557 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.613320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.613389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.613404 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.613433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.613455 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.716554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.716811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.716903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.716998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.717107 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.820200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.820538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.820739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.820943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.821187 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.924507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.924539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.924549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.924566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4729]: I0127 06:47:43.924577 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.027407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.027462 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.027477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.027497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.027509 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.130272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.130322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.130333 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.130353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.130367 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.233256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.233303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.233316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.233335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.233347 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.285145 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:19:05.647520343 +0000 UTC Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.335857 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.335899 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.335910 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.335927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.335938 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.438836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.438873 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.438884 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.438902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.438912 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.541520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.541577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.541594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.541620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.541638 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.644515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.644579 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.644596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.644622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.644641 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.747387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.747419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.747429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.747445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.747454 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.850654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.850731 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.850749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.850775 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.850800 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.953361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.953433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.953456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.953489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4729]: I0127 06:47:44.953512 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.056121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.056165 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.056174 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.056189 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.056200 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.159488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.159554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.159576 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.159605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.159626 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.262325 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.262398 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.262414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.262441 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.262464 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.286413 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:36:57.110783937 +0000 UTC Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.362377 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:45 crc kubenswrapper[4729]: E0127 06:47:45.362801 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.362398 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:45 crc kubenswrapper[4729]: E0127 06:47:45.363413 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.362461 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:45 crc kubenswrapper[4729]: E0127 06:47:45.363572 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.365402 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.365448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.365471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.365502 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.365527 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.468787 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.469085 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.469194 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.469276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.469348 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.572239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.572293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.572307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.572326 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.572336 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.601920 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.666777 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.667803 4729 scope.go:117] "RemoveContainer" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066" Jan 27 06:47:45 crc kubenswrapper[4729]: E0127 06:47:45.667999 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.675019 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.675108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.675122 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.675144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.675158 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.778190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.778252 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.778269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.778296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.778315 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.885657 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.885699 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.885710 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.885727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.885740 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.889239 4729 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.988644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.988679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.988686 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.988704 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4729]: I0127 06:47:45.988714 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.091791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.091862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.091884 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.091914 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.091937 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.197772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.197830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.197844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.197860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.197871 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.287216 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:26:32.183431263 +0000 UTC Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.300320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.300356 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.300374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.300391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.300401 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.373452 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.382518 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.390977 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.400971 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.403394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.403450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.403469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.403497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.403555 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.419762 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5
ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.438298 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.446820 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.455797 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.462946 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.478338 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.491929 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.501885 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.505686 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.505713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.505724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.505765 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.505777 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.512853 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.525834 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.608648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.608693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.608704 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.608722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.608735 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.711724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.711784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.711803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.711829 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.711860 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.814256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.814306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.814320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.814341 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.814358 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.847004 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:46 crc kubenswrapper[4729]: E0127 06:47:46.847165 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:54.847140332 +0000 UTC m=+39.914261595 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.917653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.917699 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.917716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.917741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.917758 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.947946 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:46 crc kubenswrapper[4729]: I0127 06:47:46.948024 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:46 crc kubenswrapper[4729]: E0127 06:47:46.948117 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:46 crc kubenswrapper[4729]: E0127 06:47:46.948229 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:54.948199214 +0000 UTC m=+40.015320517 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:46 crc kubenswrapper[4729]: E0127 06:47:46.948324 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:46 crc kubenswrapper[4729]: E0127 06:47:46.948480 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:54.948451082 +0000 UTC m=+40.015572495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.020811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.020873 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.020892 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.020921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.020940 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.048862 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.048921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049109 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049134 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049150 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049158 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049220 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:55.049205046 +0000 UTC m=+40.116326309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049220 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049243 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.049268 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:55.049261717 +0000 UTC m=+40.116382980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.123611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.123655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.123668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.123686 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.123697 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.226737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.226779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.226790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.226815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.226828 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.287799 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:36:06.623845444 +0000 UTC Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.329805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.329863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.329879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.329906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.329924 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.362412 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.362515 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.362528 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.362684 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.362749 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:47 crc kubenswrapper[4729]: E0127 06:47:47.362825 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.433991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.434040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.434052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.434085 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.434113 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.537923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.538150 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.538280 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.538351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.538437 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.615692 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.618182 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.625433 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.641671 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.641948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.642002 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.642018 4729 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.642043 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.642060 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.646137 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.654208 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.669809 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.680020 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.691705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.702777 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.712490 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.723610 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.741493 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.744358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.744414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.744425 4729 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.744444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.744457 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.755643 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.766654 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.775499 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.787305 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.796391 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.809100 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.822284 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.837094 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850427 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850438 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850427 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.850470 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.861578 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.873710 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.882853 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.890979 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.902276 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.914119 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.925526 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.938213 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.953737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.953794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.953808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.953849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.953866 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4729]: I0127 06:47:47.961054 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.057143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.057209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.057230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.057250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.057264 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.160793 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.160878 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.160901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.160939 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.160961 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.263783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.263838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.263849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.263868 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.263883 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.288955 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:17:46.951276697 +0000 UTC Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.367775 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.367825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.367835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.367854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.367865 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.471673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.471725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.471737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.471764 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.471777 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.574743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.574788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.574811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.574828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.574841 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.618793 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.619520 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.647375 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.671585 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.679323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.679371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.679381 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.679403 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.679415 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.690584 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.704521 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.717549 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.730704 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.744698 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.756536 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.769359 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.782022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.782066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.782095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.782113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.782124 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.783636 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.803350 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433
b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.816871 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.833209 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.843273 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.857088 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.884272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.884332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.884348 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.884372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.884392 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.987162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.987228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.987240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.987259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4729]: I0127 06:47:48.987270 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.090751 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.090810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.090824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.090843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.090853 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.192919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.192966 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.192975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.192993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.193097 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.290158 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:48:33.17432744 +0000 UTC Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.295560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.295628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.295648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.295677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.295696 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.361947 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.362032 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.361975 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.362133 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.362291 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.362553 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.399103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.399177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.399198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.399230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.399253 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.502178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.502226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.502237 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.502255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.502265 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.604746 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.604815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.604831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.604856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.604872 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.621221 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.708214 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.708284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.708309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.708342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.708365 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.810607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.810655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.810668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.810687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.810699 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.833925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.833974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.833985 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.834006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.834017 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.844999 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.849768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.849838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.849859 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.849891 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.849913 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.874039 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.881190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.881271 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.881290 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.881318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.881337 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.893351 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.897953 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.898008 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.898025 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.898050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.898092 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.938537 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.946497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.946552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.946565 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.946587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.946602 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.959828 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:49 crc kubenswrapper[4729]: E0127 06:47:49.959996 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.961884 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.961920 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.961935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.961957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4729]: I0127 06:47:49.961971 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.065241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.065329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.065353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.065387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.065410 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.168596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.168667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.168684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.168716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.168741 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.272624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.272716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.272730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.272752 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.272766 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.290362 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:57:57.717598733 +0000 UTC Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.375435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.375507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.375524 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.375552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.375584 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.479331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.479381 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.479390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.479406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.479417 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.582825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.582896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.582914 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.582943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.582962 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.625217 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.686312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.686372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.686388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.686416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.686435 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.789531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.789637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.789654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.789675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.789689 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.893026 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.893086 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.893101 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.893119 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.893132 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.996256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.996306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.996321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.996341 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4729]: I0127 06:47:50.996355 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.098980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.099041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.099051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.099086 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.099122 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.202151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.202198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.202210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.202228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.202242 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.290719 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:43:47.57728063 +0000 UTC Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.305923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.306005 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.306021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.306047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.306092 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.362306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.362388 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.362516 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:51 crc kubenswrapper[4729]: E0127 06:47:51.362859 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:51 crc kubenswrapper[4729]: E0127 06:47:51.362716 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:51 crc kubenswrapper[4729]: E0127 06:47:51.362490 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.412896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.412948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.412959 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.412978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.412990 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.516173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.516330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.516360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.516451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.516533 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.619632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.619675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.619688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.619708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.619723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.723500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.723558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.723576 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.723607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.723660 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.826957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.827001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.827017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.827041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.827095 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.930827 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.930901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.930923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.930957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4729]: I0127 06:47:51.930980 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.034067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.034165 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.034182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.034211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.034230 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.138807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.138871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.138896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.138929 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.138957 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.242673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.242747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.242791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.242831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.242858 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.291018 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:23:58.379469347 +0000 UTC Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.346258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.346323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.346339 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.346366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.346387 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.449912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.449972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.449986 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.450012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.450028 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.537025 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x"] Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.537634 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.540171 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.540568 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.547739 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.552949 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.552999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.553008 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.553030 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.553042 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.565121 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wher
eabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.579467 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.591090 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.600938 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.612116 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.612181 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.612206 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4lt\" (UniqueName: \"kubernetes.io/projected/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-kube-api-access-tk4lt\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.612257 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc 
kubenswrapper[4729]: I0127 06:47:52.613945 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.623774 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.632593 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.634510 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.643861 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.653515 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.654852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.654890 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.654907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.654934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.654953 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.669788 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.680427 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.692167 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.704871 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.712787 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.712988 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.713359 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.713670 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.713768 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4lt\" (UniqueName: 
\"kubernetes.io/projected/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-kube-api-access-tk4lt\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.714775 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.728481 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.728577 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.742694 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.742907 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4lt\" (UniqueName: \"kubernetes.io/projected/756bbc18-5ad0-4bbe-a612-30720a9f5fe3-kube-api-access-tk4lt\") pod \"ovnkube-control-plane-749d76644c-gjp6x\" (UID: \"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757066 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757878 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.757950 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.780815 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.796631 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.810242 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.821235 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.830046 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.845705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.855440 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.858031 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.860379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.860469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.860483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.860501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.860513 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.881677 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.895412 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.910056 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.925477 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.943404 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.961758 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.970213 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.970277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.970289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.970310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4729]: I0127 06:47:52.970323 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.071879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.071921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.071934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.071954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.071967 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.174388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.174432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.174442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.174466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.174476 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.277409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.277437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.277445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.277460 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.277484 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.292217 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:56:49.709647725 +0000 UTC Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.361940 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.362353 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.362373 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.362650 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.362920 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.362975 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.380065 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.380122 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.380136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.380163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.380175 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.484144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.515331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.515814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.515837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.515848 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.618518 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.618554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.618563 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.618588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.618602 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.638197 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.638551 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.640983 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" event={"ID":"756bbc18-5ad0-4bbe-a612-30720a9f5fe3","Type":"ContainerStarted","Data":"19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.641040 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" event={"ID":"756bbc18-5ad0-4bbe-a612-30720a9f5fe3","Type":"ContainerStarted","Data":"6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.641052 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" event={"ID":"756bbc18-5ad0-4bbe-a612-30720a9f5fe3","Type":"ContainerStarted","Data":"bb9515f1ee1f11d42ae956a6dcc579616648c492b801d208398752a526759e45"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.643191 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b6f5d" event={"ID":"9f924f11-f70f-436a-a7e5-fb7d0feeabc4","Type":"ContainerStarted","Data":"8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.645033 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerStarted","Data":"61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.655242 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xqs5z"] Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.655819 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.655886 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.658378 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.670310 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.680267 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.690416 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.701576 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.711980 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.720623 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.722296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.722418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.722505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.722586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.722662 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.730592 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fg22\" (UniqueName: \"kubernetes.io/projected/2c156b30-d262-4fdc-a70b-eb1703422f01-kube-api-access-4fg22\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.730824 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.732957 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.744415 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.755316 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.764536 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.775684 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: 
I0127 06:47:53.784297 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.793518 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.800901 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.807143 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.818725 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.824831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.824872 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.824882 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.824903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.824913 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.831262 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.831704 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fg22\" (UniqueName: \"kubernetes.io/projected/2c156b30-d262-4fdc-a70b-eb1703422f01-kube-api-access-4fg22\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.831831 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.832033 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:53 crc kubenswrapper[4729]: E0127 06:47:53.832214 4729 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:54.332188855 +0000 UTC m=+39.399310138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.840879 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.851029 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.858661 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.870237 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.878674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fg22\" (UniqueName: \"kubernetes.io/projected/2c156b30-d262-4fdc-a70b-eb1703422f01-kube-api-access-4fg22\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.880823 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.894966 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.905584 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.916233 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.926736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.926778 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.926790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.926810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.926823 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.928097 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.941824 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.973119 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:53 crc kubenswrapper[4729]: I0127 06:47:53.992910 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.012541 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.039792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.039860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.039871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.039888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.039904 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.155863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.155912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.155923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.155945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.155955 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.258915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.258996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.259014 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.259045 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.259096 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.293952 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:58:26.574423604 +0000 UTC Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.341857 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.342101 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.342209 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:47:55.342177804 +0000 UTC m=+40.409299107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.363224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.363285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.363306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.363334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.363353 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.469097 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.469148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.469158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.469176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.469191 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.572858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.573328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.573350 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.573380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.573401 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.664264 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7" exitCode=0 Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.664373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.678654 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.681908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.681958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.682001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.682026 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.682044 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.692448 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.708678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.719840 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.755314 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433
b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.771894 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.784856 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.800022 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.817480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.832122 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.844125 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.863036 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.877060 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.888259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.888322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.888337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.888354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.888712 4729 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.889802 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.899135 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.908839 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.948197 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.948351 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:10.948323334 +0000 UTC m=+56.015444597 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.948488 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.948534 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.948627 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.948678 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:10.948670455 +0000 UTC m=+56.015791718 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.948774 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: E0127 06:47:54.948870 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:10.9488426 +0000 UTC m=+56.015963903 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.991714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.991767 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.991784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.991808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4729]: I0127 06:47:54.991829 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.049363 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.049430 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.049667 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.049727 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.049745 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.049832 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:48:11.04980687 +0000 UTC m=+56.116928323 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.051633 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.051689 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.051713 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.051828 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:11.051795242 +0000 UTC m=+56.118916545 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.095195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.095508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.095527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.095551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.095569 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.199097 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.199158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.199173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.199199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.199219 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.294990 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:50:19.864005169 +0000 UTC Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.302412 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.302495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.302517 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.302542 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.302560 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.351979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.352700 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.352767 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:47:57.352747322 +0000 UTC m=+42.419868605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.362241 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.362381 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.362738 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.362805 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.364464 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.364553 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.364759 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:55 crc kubenswrapper[4729]: E0127 06:47:55.365011 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.407871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.407940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.407958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.407983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.407997 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.510856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.510912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.510927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.510948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.510963 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.613468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.613510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.613519 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.613535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.613548 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.671827 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerStarted","Data":"0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.674906 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987" exitCode=0 Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.674961 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.677987 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.682502 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.682569 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.690249 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.703692 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.716819 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.716882 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.716900 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.716928 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.716947 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.736538 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.751751 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.765974 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.776710 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.794784 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.805419 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.815206 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.819249 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.819280 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.819297 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.819315 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.819327 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.828537 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.841307 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.852191 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.869116 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvs
witch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.881764 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.890610 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.909793 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.922696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.922841 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.922861 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.922891 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.922908 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.924486 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.946908 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvs
witch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.958196 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.969770 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.977402 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.985269 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:55 crc kubenswrapper[4729]: I0127 06:47:55.999246 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.020445 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.025510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.025785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.025894 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.025995 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.026117 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.035762 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.047750 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.059253 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.066952 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.076947 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.085465 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.096758 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.106140 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.129038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.129118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.129132 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.129156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.129191 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.232603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.233247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.233275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.233312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.233336 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.295395 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:11:58.554256958 +0000 UTC Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.336694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.336744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.336763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.336789 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.336807 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.382337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.401240 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.420895 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.440023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.440412 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 
06:47:56.440486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.440414 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.440558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.440805 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.453572 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.467113 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.481063 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.495777 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.506456 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.517781 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.533742 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.543612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.543663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.543676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.543696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.543710 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.559499 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.591861 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\
\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is 
after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.612729 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.630369 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.648032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.648097 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.648108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.648130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.648144 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.651894 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.688153 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd" exitCode=0 Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.688201 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.713981 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.732755 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.747865 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.754843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.754878 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.754891 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.754913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.754924 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.758813 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.772503 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.790203 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.807845 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.821199 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.836840 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.853088 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.862673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.862713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.862723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.862742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.862766 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.874283 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433
b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.889670 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.906973 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.927368 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.938281 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.954545 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.966221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.966255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.966264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.966282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4729]: I0127 06:47:56.966292 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.068861 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.068900 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.068911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.068949 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.068959 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.172327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.172790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.172799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.172814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.172824 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.275227 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.275274 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.275283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.275303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.275315 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.296989 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:04:06.399899279 +0000 UTC Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.362643 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.362677 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.362712 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.362876 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.363311 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.363378 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.363520 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.363766 4729 scope.go:117] "RemoveContainer" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066" Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.363807 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.374867 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.375041 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:57 crc kubenswrapper[4729]: E0127 06:47:57.375177 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:01.375144855 +0000 UTC m=+46.442266148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.386325 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.386399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.386415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.386440 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.386456 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.489545 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.489623 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.489645 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.489676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.489698 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.592477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.592528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.592541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.592565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.592633 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.696633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.696713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.696743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.696777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.696798 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.703254 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.711410 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.712001 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.714813 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/0.log" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.720792 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6" exitCode=1 Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.720944 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.724265 4729 scope.go:117] "RemoveContainer" containerID="279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.729197 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.729347 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536" exitCode=0 Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.729409 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.751230 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.768322 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.788742 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.799765 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.800000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.800173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.800302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.800427 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.804118 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.833484 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.851691 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.868148 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.894731 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.903535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.903934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.904332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.904625 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.904924 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.912207 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.922083 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.941094 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.954449 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.976889 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:57 crc kubenswrapper[4729]: I0127 06:47:57.995778 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010155 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010166 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.010231 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.021820 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.039785 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.055854 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.079942 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.095316 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.109691 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.112720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.112770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.112782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.112809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.112822 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.127376 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.145123 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.164428 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.179031 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.210485 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864
d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.216041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.216108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.216122 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.216139 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.216150 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.229752 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.249823 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.272520 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.298075 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:02:16.122970948 +0000 UTC Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.303682 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.318957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.318985 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.318994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.319009 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.319022 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.320300 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.421717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.421782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.421798 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.421820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.421835 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.524353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.524384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.524393 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.524407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.524416 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.627089 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.627118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.627129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.627144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.627155 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.730262 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.730291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.730301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.730317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.730327 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.734881 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerStarted","Data":"4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.737225 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/0.log" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.739509 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.739597 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.759769 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.777197 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.809848 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.831441 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.832785 4729 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.832809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.832817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.832836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.832846 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.843188 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.861353 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.879353 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.893601 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.905294 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.920634 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.934412 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.935490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.935508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.935516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.935533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.935543 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.953617 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.966671 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.978714 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:58 crc kubenswrapper[4729]: I0127 06:47:58.996182 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864
d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.011336 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.029153 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.038503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.038555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.038567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.038585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.038596 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.043207 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.056607 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.067982 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.086306 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.099656 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.116302 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.128480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.141240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.141278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.141289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.141310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.141323 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.148668 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3
f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.163559 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.178392 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.195592 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.212699 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.234701 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.245732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.245777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.245788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.245806 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.245819 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.254052 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.273280 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.298939 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:27:08.689360049 +0000 UTC Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.348567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.348627 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.348641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.348661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.348674 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.362095 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.362142 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.362055 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.362067 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:59 crc kubenswrapper[4729]: E0127 06:47:59.362319 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:59 crc kubenswrapper[4729]: E0127 06:47:59.362483 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:59 crc kubenswrapper[4729]: E0127 06:47:59.362610 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:47:59 crc kubenswrapper[4729]: E0127 06:47:59.362764 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.451569 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.451930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.452121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.452349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.452530 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.555093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.555140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.555151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.555166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.555177 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.658520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.658570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.658587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.658608 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.658624 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.746222 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/1.log" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.747288 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/0.log" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.751375 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b" exitCode=1 Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.752161 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.752255 4729 scope.go:117] "RemoveContainer" containerID="279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.753335 4729 scope.go:117] "RemoveContainer" containerID="7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b" Jan 27 06:47:59 crc kubenswrapper[4729]: E0127 06:47:59.753564 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.761689 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.761441 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104" exitCode=0 Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.761949 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.761972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.761983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.762002 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.762014 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.777159 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.799154 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.817050 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.831165 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.845594 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.859742 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.868206 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.868240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.868253 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.868271 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.868286 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.876559 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.889415 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.906267 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.937949 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.954450 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.970823 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.971598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.971651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.971669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.971695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.971710 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4729]: I0127 06:47:59.988122 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:59Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.003251 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.017771 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.023650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.023698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.023710 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.023733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.023747 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.036171 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.036202 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.039488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.039523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.039536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.039556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.039570 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.051487 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.052217 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.060421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.060469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.060479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.060500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.060512 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.071122 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.074163 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.080305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.080369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.080383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.080407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.080422 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.083619 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.096750 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.096889 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.100883 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.100983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.101042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.101143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.101206 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.113701 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: E0127 06:48:00.114096 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.116087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.116178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.116237 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.116313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.116397 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.117333 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.134492 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.149111 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.162446 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.182675 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.200393 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.219125 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.221396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.221444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.221456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.221477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.221492 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.245448 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3
f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.261042 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.285357 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.299212 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:35:27.353036379 +0000 UTC Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.299314 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.317333 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.324313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.324347 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.324358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.324373 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.324384 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.426994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.427049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.427059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.427159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.427172 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.530429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.530479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.530492 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.530509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.530522 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.633650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.634121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.634310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.634498 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.634661 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.738977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.739047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.739098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.739138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.739160 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.768368 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/1.log" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.776630 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b8949c5-4022-49a3-af0d-2580921d3b18" containerID="05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d" exitCode=0 Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.776707 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerDied","Data":"05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.795065 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.826515 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb
17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.842829 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.842895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.842909 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.842935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.842959 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.845324 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.865539 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.883933 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.901848 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.917104 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.935626 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.949434 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.949480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.949491 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 
06:48:00.949511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.949523 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.959756 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.972847 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4729]: I0127 06:48:00.989044 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.008814 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.023503 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.039479 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.051786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.051817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.051826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.051843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.051856 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.057337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3
f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.071294 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.159964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.160014 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.160023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.160041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.160054 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.264392 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.264466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.264489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.264518 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.264536 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.300686 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:34:19.747054472 +0000 UTC Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.362383 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.362542 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.362594 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.362588 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.362546 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.362783 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.363025 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.363250 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.369389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.369453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.369478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.369510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.369534 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.418312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.418595 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:01 crc kubenswrapper[4729]: E0127 06:48:01.418965 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:09.418929291 +0000 UTC m=+54.486050594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.473146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.473225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.473249 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.473284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.473307 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.577598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.577677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.577703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.577742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.577764 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.681442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.681516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.681534 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.681565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.681584 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.783521 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.783828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.783994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.784192 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.784319 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.784788 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" event={"ID":"3b8949c5-4022-49a3-af0d-2580921d3b18","Type":"ContainerStarted","Data":"aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.803459 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.820577 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.835247 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.854253 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.881625 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.887243 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.887293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.887307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.887330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.887346 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.895604 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.911415 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.931479 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.947691 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.968148 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.983480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.989522 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.989556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.989568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.989589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.989602 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4729]: I0127 06:48:01.999132 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.020965 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.038110 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.058251 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.075160 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.092098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.092432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.092588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.092743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.092910 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.196824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.196889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.196913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.196944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.196966 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300887 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.300892 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:03:09.248805235 +0000 UTC Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.404166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.404225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.404242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.404268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.404286 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.507793 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.507848 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.507864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.507892 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.507911 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.611485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.611549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.611567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.611595 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.611612 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.714815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.715175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.715193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.715223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.715247 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.818010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.818046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.818055 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.818093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.818108 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.921318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.921379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.921396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.921423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4729]: I0127 06:48:02.921441 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.025811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.025870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.025880 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.025903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.025916 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.129993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.130057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.130099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.130128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.130145 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.233867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.233950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.233974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.234009 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.234032 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.301515 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:54:06.671385476 +0000 UTC Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.337655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.337712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.337721 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.337743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.337757 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.362351 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.362440 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.362479 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.362577 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:03 crc kubenswrapper[4729]: E0127 06:48:03.362572 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:03 crc kubenswrapper[4729]: E0127 06:48:03.362695 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:03 crc kubenswrapper[4729]: E0127 06:48:03.362912 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:03 crc kubenswrapper[4729]: E0127 06:48:03.363147 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.440576 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.440633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.440654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.440702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.440722 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.544204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.544269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.544281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.544301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.544312 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.647799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.647879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.647899 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.647940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.647960 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.751889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.751961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.751972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.751994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.752006 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.854979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.855038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.855047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.855084 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.855097 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.958406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.958454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.958463 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.958481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4729]: I0127 06:48:03.958493 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.062042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.062152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.062170 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.062198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.062219 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.165019 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.165113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.165134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.165162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.165180 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.268135 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.268188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.268201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.268220 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.268234 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.302728 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:45:40.770872993 +0000 UTC Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.369921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.369961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.369972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.369987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.370001 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.473095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.473133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.473142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.473161 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.473172 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.575568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.575615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.575624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.575641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.575655 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.678708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.678771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.678790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.678817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.678835 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.781300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.781366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.781384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.781412 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.781429 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.883787 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.883835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.883847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.883869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.883881 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.987143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.987222 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.987245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.987281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4729]: I0127 06:48:04.987305 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.101016 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.101100 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.101117 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.101142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.101160 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.204665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.204743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.204765 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.204798 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.204816 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.303298 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:47:18.71875633 +0000 UTC Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.308795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.308869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.308888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.308919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.308938 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.361651 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.361700 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.361750 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.361682 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:05 crc kubenswrapper[4729]: E0127 06:48:05.361958 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:05 crc kubenswrapper[4729]: E0127 06:48:05.362240 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:05 crc kubenswrapper[4729]: E0127 06:48:05.362366 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:05 crc kubenswrapper[4729]: E0127 06:48:05.362479 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.412401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.412459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.412479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.412506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.412524 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.515130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.515189 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.515200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.515218 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.515228 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.618540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.618609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.618631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.618663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.618685 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.721971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.722044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.722104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.722136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.722158 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.824413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.824475 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.824489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.824507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.824540 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.928755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.928805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.928815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.928834 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:05 crc kubenswrapper[4729]: I0127 06:48:05.928846 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:05Z","lastTransitionTime":"2026-01-27T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.031647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.031700 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.031712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.031733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.031746 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.135515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.135578 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.135596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.135621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.135639 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.239342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.239396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.239412 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.239439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.239458 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.303989 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:51:41.06322804 +0000 UTC Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.342778 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.342852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.342870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.342901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.342919 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.386381 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.407601 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.424338 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.440674 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.451477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.451539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.451551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.451571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.451586 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.461688 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.482111 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.495117 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.511323 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.535371 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556336 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556391 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.556532 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 
06:48:06.580145 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.617438 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountP
ath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279f429c99e1fa2b9b31830bae290013492b6433b8f24ffdf201d64e9e5021c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:57Z\\\",\\\"message\\\":\\\"1.Pod event handler 6\\\\nI0127 06:47:57.233777 5522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:57.233788 5522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:57.233796 5522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:57.233804 5522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:57.234893 5522 factory.go:656] Stopping watch factory\\\\nI0127 06:47:57.235051 5522 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:57.235371 5522 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235410 5522 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:47:57.235744 5522 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:57.236002 5522 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.637935 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.652256 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.658582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.658607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.658615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.658631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.658641 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.665997 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.684393 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.762345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.762409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc 
kubenswrapper[4729]: I0127 06:48:06.762429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.762459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.762478 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.866287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.866344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.866361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.866387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.866403 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.969787 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.969837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.969854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.969879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:06 crc kubenswrapper[4729]: I0127 06:48:06.969896 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:06Z","lastTransitionTime":"2026-01-27T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.072870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.072938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.072950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.072970 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.072983 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.176020 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.176090 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.176104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.176126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.176139 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.279528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.279577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.279586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.279603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.279613 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.304316 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:04:57.90022474 +0000 UTC Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.362057 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.362057 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.362057 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.362234 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:07 crc kubenswrapper[4729]: E0127 06:48:07.362419 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:07 crc kubenswrapper[4729]: E0127 06:48:07.362482 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:07 crc kubenswrapper[4729]: E0127 06:48:07.362612 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:07 crc kubenswrapper[4729]: E0127 06:48:07.362747 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.382387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.382451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.382469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.382497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.382515 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.485303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.485370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.485386 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.485414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.485432 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.589255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.589394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.589457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.589533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.589593 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.692087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.692151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.692165 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.692186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.692201 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.796052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.796164 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.796185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.796214 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.796232 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.899843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.900342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.900533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.900726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:07 crc kubenswrapper[4729]: I0127 06:48:07.900879 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:07Z","lastTransitionTime":"2026-01-27T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.003826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.003886 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.003904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.003929 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.003946 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.107353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.107404 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.107414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.107433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.107446 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.210138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.210184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.210194 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.210212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.210225 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.305546 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:46:30.299236679 +0000 UTC Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.312848 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.312916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.312935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.312968 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.312991 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.415945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.416005 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.416023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.416054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.416102 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.527806 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.527880 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.527902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.527935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.527956 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.631039 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.631149 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.631173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.631204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.631254 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.734779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.734832 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.734844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.734861 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.734872 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.839246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.839661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.839862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.840112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.840554 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.943599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.943696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.943716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.943750 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:08 crc kubenswrapper[4729]: I0127 06:48:08.943769 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:08Z","lastTransitionTime":"2026-01-27T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.046967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.047037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.047062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.047125 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.047149 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.150374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.150444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.150467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.150500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.150523 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.253812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.253872 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.253895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.253930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.253949 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.306640 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:21:16.765135803 +0000 UTC Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.357339 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.357397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.357418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.357449 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.357466 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.361644 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.361946 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.361977 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.362053 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.362182 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.362242 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.362312 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.362484 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.460435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.460487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.460499 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.460524 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.460538 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.515044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.515496 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:09 crc kubenswrapper[4729]: E0127 06:48:09.515647 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:25.515620305 +0000 UTC m=+70.582741558 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.564266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.564312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.564322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.564342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.564355 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.602573 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.604301 4729 scope.go:117] "RemoveContainer" containerID="7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.646455 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.667241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.667303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.667326 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.667361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.667380 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.690029 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20
ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.708324 4729 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.722586 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.734933 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.746515 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.762181 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.770311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.770357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.770369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.770388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.770401 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.778040 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.791749 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.803497 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.814847 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/1.log" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.816943 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.818010 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.826628 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3
f5f3e75990772ccef99f560b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.840856 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.857698 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.873168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.873204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.873212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.873231 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.873241 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.881274 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.895943 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.912877 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.935949 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48dd
ef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.951630 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.963612 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.975505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.975570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.975589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.975614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.975631 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:09Z","lastTransitionTime":"2026-01-27T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.978940 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:09 crc kubenswrapper[4729]: I0127 06:48:09.992829 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.008665 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.026916 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.046885 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.064393 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.078162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.078212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.078226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.078251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.078267 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.083691 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 
06:48:10.100456 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.125003 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountP
ath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.143004 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.155946 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.165966 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.181276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.181323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.181336 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.181357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.181371 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.182436 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:
47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.226444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.226494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.226503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.226520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.226531 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.239150 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.244404 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.244483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.244495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.244519 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.244532 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.251290 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.256862 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.262428 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.262475 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.262490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.262512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.262525 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.265515 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.267753 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.277894 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.281259 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.282805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.282840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.282857 4729 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.282880 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.282915 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.296667 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.297541 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.300572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.300600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.300613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.300632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.300643 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.307392 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:17:52.593282629 +0000 UTC Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.314166 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.314281 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.314271 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.316687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.316725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.316736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.316757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.316768 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.334047 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de08
08bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.345314 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.354184 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.371555 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.392238 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.405041 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419336 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419374 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.419636 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.430370 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.444972 4729 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.457507 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.472378 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.484344 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.522832 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.523054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.523160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.523242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.523355 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.626035 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.626094 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.626106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.626128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.626140 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.728636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.728851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.728965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.729049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.729157 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.824133 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/2.log" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.825042 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/1.log" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.829725 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" exitCode=1 Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.829822 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.829894 4729 scope.go:117] "RemoveContainer" containerID="7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831233 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831702 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.831720 4729 scope.go:117] "RemoveContainer" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" Jan 27 06:48:10 crc kubenswrapper[4729]: E0127 06:48:10.832374 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.853172 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.873177 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.893727 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.910670 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.933174 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.938969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.939039 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.939061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.939128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.939152 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:10Z","lastTransitionTime":"2026-01-27T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.951603 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.978427 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de08
08bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6e7c9444e1d529a60196cd50a46189fe91a8e3f5f3e75990772ccef99f560b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:59Z\\\",\\\"message\\\":\\\"lversions/factory.go:141\\\\nI0127 06:47:59.028351 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:59.028367 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:59.028373 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:59.028379 6059 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:59.029226 6059 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:59.029268 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:59.029312 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:59.029341 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:59.029357 6059 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:59.029379 6059 factory.go:656] Stopping watch factory\\\\nI0127 06:47:59.029403 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:59.029408 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:59.029419 6059 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:59.029424 6059 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:47:59.029463 6059 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:10 crc kubenswrapper[4729]: I0127 06:48:10.995532 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.014356 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.065972 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.066195 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066337 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:43.066314052 +0000 UTC m=+88.133435315 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.066415 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.066456 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.066492 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.066532 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066579 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066597 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066631 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:43.066624382 +0000 UTC m=+88.133745645 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066645 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:43.066640162 +0000 UTC m=+88.133761425 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066686 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066716 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066737 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066731 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066776 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066793 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066799 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:43.066782107 +0000 UTC m=+88.133903390 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.066867 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:43.066841309 +0000 UTC m=+88.133962582 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.068011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.068044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.068057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.068094 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.068108 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.084846 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.099738 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.120719 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.142850 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.161503 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.171628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.171681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.171698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.171720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.171745 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.181194 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.197392 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.274432 4729 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.274508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.274532 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.274575 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.274595 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.308038 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:10:06.495506663 +0000 UTC Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.362319 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.362430 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.362365 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.362559 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.362695 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.362879 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.363028 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.363215 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.377200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.377247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.377256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.377273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.377283 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.479671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.479727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.479744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.479766 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.479777 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.582694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.582756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.582775 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.582809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.582828 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.691681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.691743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.691755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.691777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.691790 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.795675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.795781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.795800 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.795860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.795880 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.835243 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/2.log" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.839348 4729 scope.go:117] "RemoveContainer" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" Jan 27 06:48:11 crc kubenswrapper[4729]: E0127 06:48:11.839551 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.854184 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.867923 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.881480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.895864 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.899131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.899197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.899221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.899254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.899280 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:11Z","lastTransitionTime":"2026-01-27T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.909324 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.921518 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.932409 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.946269 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.963486 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.976519 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.982773 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:48:11 crc kubenswrapper[4729]: I0127 06:48:11.989164 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.002801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.002875 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.002888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.002910 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.002942 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.003534 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.013649 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.025465 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.036178 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.045468 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.056597 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.067683 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.079269 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.093740 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.106092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.106153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.106173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.106200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.106219 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.110823 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.130947 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.194126 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208770 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.208878 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.224001 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.237181 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.249382 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.265175 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.275046 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.289418 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.302017 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.308517 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:40:57.48436035 +0000 UTC Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.311241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.311273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.311283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.311302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.311316 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.317315 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.328494 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.342667 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.413058 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.413128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.413145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.413169 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.413184 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.517317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.517413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.517433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.517458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.517478 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.621391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.621714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.621895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.622047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.622294 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.725396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.725449 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.725465 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.725490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.725507 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.828768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.828883 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.828901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.828926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.828943 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.932170 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.932242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.932260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.932290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:12 crc kubenswrapper[4729]: I0127 06:48:12.932307 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:12Z","lastTransitionTime":"2026-01-27T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.035700 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.035761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.035772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.035796 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.035808 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.138801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.138942 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.138955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.138972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.138984 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.242112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.242160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.242180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.242201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.242213 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.309026 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:04:18.553251731 +0000 UTC Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.344846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.344915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.344938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.344970 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.344994 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.361667 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.361704 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.361704 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.361755 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:13 crc kubenswrapper[4729]: E0127 06:48:13.361884 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:13 crc kubenswrapper[4729]: E0127 06:48:13.362040 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:13 crc kubenswrapper[4729]: E0127 06:48:13.362279 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:13 crc kubenswrapper[4729]: E0127 06:48:13.362379 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.448027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.448123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.448148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.448180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.448204 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.551143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.551178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.551188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.551205 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.551218 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.654311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.654360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.654375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.654400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.654418 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.757037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.757145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.757206 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.757238 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.757256 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.860753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.860818 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.860836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.860863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.860884 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.964454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.964528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.964549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.964578 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:13 crc kubenswrapper[4729]: I0127 06:48:13.964596 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:13Z","lastTransitionTime":"2026-01-27T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.067468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.067518 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.067530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.067553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.067567 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.171018 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.171106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.171130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.171159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.171179 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.274688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.274767 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.274786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.274817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.274866 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.309752 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:43:06.751446909 +0000 UTC Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.377836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.377884 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.377896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.377913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.377926 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.481480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.481526 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.481537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.481559 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.481573 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.583906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.583961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.583977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.584001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.584018 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.687454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.687525 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.687549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.687579 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.687600 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.790905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.790996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.791014 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.791049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.791101 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.893871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.893929 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.893940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.893962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.893972 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.996066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.996123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.996131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.996148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:14 crc kubenswrapper[4729]: I0127 06:48:14.996157 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:14Z","lastTransitionTime":"2026-01-27T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.100639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.100713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.100737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.100769 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.100797 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.203983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.204037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.204050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.204088 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.204101 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.307997 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.308046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.308060 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.308099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.308114 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.310270 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:48:15.105444121 +0000 UTC Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.362216 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:15 crc kubenswrapper[4729]: E0127 06:48:15.362427 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.362534 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:15 crc kubenswrapper[4729]: E0127 06:48:15.362759 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.362534 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.362557 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:15 crc kubenswrapper[4729]: E0127 06:48:15.363150 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:15 crc kubenswrapper[4729]: E0127 06:48:15.363246 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.411971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.412051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.412108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.412139 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.412160 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.515703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.515761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.515779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.515806 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.515823 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.618685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.618717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.618727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.618745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.618754 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.721847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.722246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.722383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.722597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.722741 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.825905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.825956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.825967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.825987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.826001 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.929933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.929988 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.930005 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.930032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:15 crc kubenswrapper[4729]: I0127 06:48:15.930054 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:15Z","lastTransitionTime":"2026-01-27T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.034001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.034099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.034113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.034133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.034147 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.137454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.137523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.137543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.137572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.137591 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.240574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.240631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.240649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.240677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.240697 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.310618 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:00:11.822059102 +0000 UTC Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.343675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.343771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.343804 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.343843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.343874 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.381930 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.398849 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.416045 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.429465 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455721 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.455842 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de08
08bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.470438 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.484529 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.497666 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.509829 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.522064 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.539264 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.552511 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.559452 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.559540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.559557 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.559612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.559628 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.568619 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.589329 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.606189 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.620421 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.637473 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.662830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.662904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.662922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.662956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.662976 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.766410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.766534 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.766556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.766594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.766614 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.869223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.869598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.869677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.869742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.869819 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.972822 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.973160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.973245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.973358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:16 crc kubenswrapper[4729]: I0127 06:48:16.973427 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:16Z","lastTransitionTime":"2026-01-27T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.077197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.077253 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.077276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.077308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.077328 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.180413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.180471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.180490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.180515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.180534 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.284207 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.284286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.284305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.284335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.284354 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.311647 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:14:22.433465822 +0000 UTC Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.362182 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.362262 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.362192 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:17 crc kubenswrapper[4729]: E0127 06:48:17.362410 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.362182 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:17 crc kubenswrapper[4729]: E0127 06:48:17.362677 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:17 crc kubenswrapper[4729]: E0127 06:48:17.363103 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:17 crc kubenswrapper[4729]: E0127 06:48:17.362891 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.387482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.387769 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.387881 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.388007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.388136 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.490748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.490797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.490808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.490827 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.490839 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.593192 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.593522 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.593618 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.593722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.593820 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.697980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.698056 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.698126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.698160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.698191 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.801422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.801577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.801601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.801631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.801648 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.904746 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.904808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.904821 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.904843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:17 crc kubenswrapper[4729]: I0127 06:48:17.904855 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:17Z","lastTransitionTime":"2026-01-27T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.007560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.007602 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.007615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.007629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.007639 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.110596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.110642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.110704 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.110730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.110744 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.213677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.213741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.213754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.213771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.213783 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.312438 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:01:36.530322737 +0000 UTC Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.317633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.317684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.317702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.317733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.317753 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.421620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.421673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.421689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.421715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.421733 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.526145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.526215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.526234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.526267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.526286 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.630182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.630234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.630251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.630275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.630291 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.733495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.733546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.733672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.733700 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.733736 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.837209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.837260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.837278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.837306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.837324 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.941044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.941195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.941215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.941241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:18 crc kubenswrapper[4729]: I0127 06:48:18.941260 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:18Z","lastTransitionTime":"2026-01-27T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.044742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.044802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.044820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.044846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.044863 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.148535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.148600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.148622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.148650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.148667 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.254188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.254661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.254862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.255062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.255303 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.313503 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:27:07.629761317 +0000 UTC Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.359319 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.359375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.359394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.359420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.359439 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.361622 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:19 crc kubenswrapper[4729]: E0127 06:48:19.361805 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.362179 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:19 crc kubenswrapper[4729]: E0127 06:48:19.362458 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.362620 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.362681 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:19 crc kubenswrapper[4729]: E0127 06:48:19.362876 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:19 crc kubenswrapper[4729]: E0127 06:48:19.363061 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.462652 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.462701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.462711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.462754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.462766 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.565209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.565241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.565250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.565265 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.565274 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.668029 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.668091 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.668103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.668116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.668125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.771314 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.771376 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.771389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.771411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.771425 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.873182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.873257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.873267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.873281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.873292 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.975747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.975785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.975795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.975812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:19 crc kubenswrapper[4729]: I0127 06:48:19.975822 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:19Z","lastTransitionTime":"2026-01-27T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.079134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.079566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.079705 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.079843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.080015 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.183406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.183468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.183482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.183527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.183541 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.287544 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.287587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.287597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.287615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.287625 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.314621 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:04:55.660709719 +0000 UTC Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.391608 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.391955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.392179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.392359 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.392528 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.495830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.495890 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.495903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.495925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.495940 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.585168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.585253 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.585268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.585317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.585330 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.600985 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.605127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.605205 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.605232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.605266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.605290 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.619968 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.625589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.625653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.625663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.625687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.625701 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.646273 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.651050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.651096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.651108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.651129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.651144 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.666908 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.671215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.671258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.671271 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.671293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.671306 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.687631 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:20 crc kubenswrapper[4729]: E0127 06:48:20.687788 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.690040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.690090 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.690099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.690115 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.690125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.792611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.792672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.792685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.792708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.792723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.894511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.894542 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.894552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.894572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.894585 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.996608 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.996644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.996655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.996671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:20 crc kubenswrapper[4729]: I0127 06:48:20.996684 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:20Z","lastTransitionTime":"2026-01-27T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.100763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.101603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.101681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.101756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.101824 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.204277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.204324 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.204335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.204355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.204366 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.307052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.307623 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.307702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.307790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.307858 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.315209 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:59:08.2412196 +0000 UTC Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.361742 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.361840 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:21 crc kubenswrapper[4729]: E0127 06:48:21.361924 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:21 crc kubenswrapper[4729]: E0127 06:48:21.362054 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.361767 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:21 crc kubenswrapper[4729]: E0127 06:48:21.362362 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.362505 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:21 crc kubenswrapper[4729]: E0127 06:48:21.362763 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.410188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.410223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.410235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.410254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.410266 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.511935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.511965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.511973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.512007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.512018 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.615553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.616166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.616323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.616471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.616607 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.718998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.719732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.719896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.720051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.720225 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.823266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.823315 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.823329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.823353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.823368 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.925974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.926506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.926610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.926714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:21 crc kubenswrapper[4729]: I0127 06:48:21.926852 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:21Z","lastTransitionTime":"2026-01-27T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.030791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.030838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.030852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.030873 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.030885 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.136293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.136332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.136364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.136383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.136393 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.238410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.238474 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.238483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.238500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.238511 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.316303 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:57:31.566949351 +0000 UTC Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.340642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.340687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.340697 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.340740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.340756 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.443002 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.443049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.443063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.443106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.443123 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.546467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.546521 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.546538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.546567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.546586 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.650651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.650695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.650711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.650735 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.650754 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.753992 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.754049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.754103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.754136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.754160 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.856272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.856314 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.856325 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.856345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.856357 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.958714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.958769 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.958780 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.958797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:22 crc kubenswrapper[4729]: I0127 06:48:22.958807 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:22Z","lastTransitionTime":"2026-01-27T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.061779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.061825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.061837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.061855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.061867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.165153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.165194 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.165205 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.165226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.165237 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.268799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.268859 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.268880 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.268911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.268931 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.316883 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:36:12.247429182 +0000 UTC Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.362249 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.362306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.362328 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.362432 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:23 crc kubenswrapper[4729]: E0127 06:48:23.362437 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:23 crc kubenswrapper[4729]: E0127 06:48:23.362570 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:23 crc kubenswrapper[4729]: E0127 06:48:23.362825 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:23 crc kubenswrapper[4729]: E0127 06:48:23.362890 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.371647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.371702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.371716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.371739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.371755 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.474393 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.474443 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.474457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.474476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.474490 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.577754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.577808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.577818 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.577840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.577851 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.681424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.681478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.681490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.681516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.681531 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.783849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.783925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.783937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.783958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.783973 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.886421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.886466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.886475 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.886494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.886506 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.990179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.990420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.990433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.990457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:23 crc kubenswrapper[4729]: I0127 06:48:23.990466 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:23Z","lastTransitionTime":"2026-01-27T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.093785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.093828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.093839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.093861 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.093872 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.196292 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.196346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.196360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.196378 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.196392 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.299011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.299082 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.299092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.299111 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.299123 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.318052 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:49:18.082564428 +0000 UTC Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.372882 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.403020 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.403151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.403167 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.403189 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.403200 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.506123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.506190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.506201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.506222 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.506234 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.608620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.608671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.608689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.608709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.608720 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.711593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.711644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.711659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.711680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.711692 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.814305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.814364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.814411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.814437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.814451 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.916665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.916705 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.916731 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.916750 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:24 crc kubenswrapper[4729]: I0127 06:48:24.916760 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:24Z","lastTransitionTime":"2026-01-27T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.019605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.019663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.019682 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.019709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.019727 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.122230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.122291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.122309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.122338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.122356 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.225366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.225437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.225459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.225493 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.225518 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.319911 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:07:38.508477403 +0000 UTC Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.329150 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.329185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.329195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.329225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.329237 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.362492 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.362622 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.363479 4729 scope.go:117] "RemoveContainer" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.363625 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.363752 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.363811 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.364135 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.364187 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.364228 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.364277 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.432055 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.432149 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.432168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.432195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.432208 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.534555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.534626 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.534639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.534667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.534678 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.539432 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.539635 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:25 crc kubenswrapper[4729]: E0127 06:48:25.539726 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.539701806 +0000 UTC m=+102.606823159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.637760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.637820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.637832 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.637852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.637864 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.741033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.741094 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.741104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.741121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.741131 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.843533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.843601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.843615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.843637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.843652 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.946644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.946703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.946721 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.946747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:25 crc kubenswrapper[4729]: I0127 06:48:25.946764 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:25Z","lastTransitionTime":"2026-01-27T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.050035 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.050102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.050111 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.050130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.050141 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.152550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.152600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.152609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.152639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.152649 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.255386 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.255444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.255458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.255481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.255496 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.320932 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:20:39.4295245 +0000 UTC Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.358564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.358624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.358641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.358670 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.358691 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.374177 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.387828 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5
d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.400150 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.411657 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.422420 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.430928 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.442498 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.453789 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.461056 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.461103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.461113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.461129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.461138 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.466006 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.481019 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.496648 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.506864 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.521581 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.545847 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.563480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.563526 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.563537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.563557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.563569 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.576323 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.594494 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a38
31c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.611961 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.642281 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.666395 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.666447 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.666458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.666478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.666491 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.769912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.769969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.769980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.770000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.770015 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.873035 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.873092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.873102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.873121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.873131 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.975594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.975631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.975642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.975659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:26 crc kubenswrapper[4729]: I0127 06:48:26.975672 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:26Z","lastTransitionTime":"2026-01-27T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.078626 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.078698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.078711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.078738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.078753 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.181761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.182114 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.182221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.182308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.182387 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.284589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.284670 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.284682 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.284703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.284717 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.321780 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:03:06.235148954 +0000 UTC Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.362410 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.362476 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:27 crc kubenswrapper[4729]: E0127 06:48:27.362565 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.362413 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:27 crc kubenswrapper[4729]: E0127 06:48:27.362648 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.362421 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:27 crc kubenswrapper[4729]: E0127 06:48:27.362740 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:27 crc kubenswrapper[4729]: E0127 06:48:27.362856 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.387520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.387556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.387567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.387586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.387597 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.489915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.490155 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.490270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.490344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.490410 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.592586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.592642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.592654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.592674 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.592685 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.695118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.695154 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.695163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.695178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.695188 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.798052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.798392 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.798581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.798760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.798915 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.901834 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.902260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.902401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.902544 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:27 crc kubenswrapper[4729]: I0127 06:48:27.902670 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:27Z","lastTransitionTime":"2026-01-27T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.006554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.006620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.006641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.006669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.006688 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.109833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.109888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.109905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.110191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.110227 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.212770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.212805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.212821 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.212844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.212860 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.316533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.316603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.316622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.316651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.316669 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.322790 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:00:38.910598048 +0000 UTC Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.419163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.419239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.419260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.419288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.419307 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.522973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.523320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.523472 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.523619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.523754 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.627184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.627239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.627254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.627275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.627292 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.730687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.730737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.730750 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.730773 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.730787 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.833561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.833590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.833597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.833612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.833623 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.936510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.936568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.936583 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.936605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:28 crc kubenswrapper[4729]: I0127 06:48:28.936620 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:28Z","lastTransitionTime":"2026-01-27T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.040291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.040376 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.040388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.040420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.040435 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.143540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.143587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.143605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.143630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.143647 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.247116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.247178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.247189 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.247206 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.247217 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.323426 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:32:56.513505425 +0000 UTC Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.350235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.350308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.350327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.350354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.350377 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.361843 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.361934 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.362012 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:29 crc kubenswrapper[4729]: E0127 06:48:29.362216 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:29 crc kubenswrapper[4729]: E0127 06:48:29.362350 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:29 crc kubenswrapper[4729]: E0127 06:48:29.362481 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.362706 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:29 crc kubenswrapper[4729]: E0127 06:48:29.362945 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.453757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.454429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.454678 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.455138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.455339 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.558673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.558741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.558764 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.558796 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.558818 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.662172 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.662515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.662703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.662777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.662836 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.765938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.766267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.766390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.766453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.766520 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.869031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.869420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.869479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.869537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.869591 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.972344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.972764 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.972863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.972965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:29 crc kubenswrapper[4729]: I0127 06:48:29.973048 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:29Z","lastTransitionTime":"2026-01-27T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.076815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.076887 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.076906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.076933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.076951 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.180691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.180758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.180776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.180803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.180822 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.283848 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.283933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.283950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.283979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.284002 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.324394 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:04:05.52510792 +0000 UTC Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.387581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.387634 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.387650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.387676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.387695 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.491697 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.491795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.491812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.491870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.491891 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.595309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.595368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.595391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.595423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.595445 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.698370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.698420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.698440 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.698474 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.698493 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.801866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.801926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.801950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.801979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.801999 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.906414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.906486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.906509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.906538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:30 crc kubenswrapper[4729]: I0127 06:48:30.906560 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:30Z","lastTransitionTime":"2026-01-27T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.009707 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.009762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.009782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.009811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.009831 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.114197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.114247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.114263 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.114287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.114303 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.135389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.135448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.135461 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.135483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.135498 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.152840 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.169528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.169869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.169881 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.171938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.172008 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.186047 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.190487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.190514 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.190524 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.190544 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.190555 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.203918 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.207523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.207569 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.207580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.207599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.207609 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.220670 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.223901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.223922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.223930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.223944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.223953 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.237243 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.237398 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.239365 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.239397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.239408 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.239430 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.239446 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.324645 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:18:09.398190572 +0000 UTC Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.341855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.341886 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.341896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.341913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.341927 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.361723 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.361895 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.362007 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.362009 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.362051 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.362251 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.362296 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:31 crc kubenswrapper[4729]: E0127 06:48:31.362437 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.445037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.445128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.445141 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.445163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.445178 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.548062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.548126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.548137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.548154 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.548165 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.651352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.651399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.651410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.651429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.651443 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.754679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.754719 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.754728 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.754746 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.754757 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.857735 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.857799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.857814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.857837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.857853 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.962387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.962466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.962484 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.962509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:31 crc kubenswrapper[4729]: I0127 06:48:31.962523 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:31Z","lastTransitionTime":"2026-01-27T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.066680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.066742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.066757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.066782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.066799 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.169661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.169703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.169717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.169738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.169751 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.273580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.273656 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.273674 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.273701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.273718 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.325334 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:15:20.002908018 +0000 UTC Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.376606 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.376672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.376686 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.376709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.376724 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.480228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.480307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.480328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.480357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.480383 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.583389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.583469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.583481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.583501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.583512 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.686613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.686677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.686694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.686724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.686745 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.791062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.791159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.791176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.791203 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.791226 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.894245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.894316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.894338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.894365 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.894380 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.997505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.997567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.997581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.997610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:32 crc kubenswrapper[4729]: I0127 06:48:32.997626 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:32Z","lastTransitionTime":"2026-01-27T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.101571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.101621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.101639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.101668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.101686 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.205706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.205783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.205795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.205819 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.205836 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.308477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.308506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.308536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.308553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.308562 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.325903 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:15:35.159524605 +0000 UTC Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.361848 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.361885 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.361897 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:33 crc kubenswrapper[4729]: E0127 06:48:33.362015 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.362085 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:33 crc kubenswrapper[4729]: E0127 06:48:33.362130 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:33 crc kubenswrapper[4729]: E0127 06:48:33.362175 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:33 crc kubenswrapper[4729]: E0127 06:48:33.362216 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.411911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.411972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.411999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.412031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.412052 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.516424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.516505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.516527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.516555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.516575 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.620366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.620416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.620432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.620457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.620474 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.724536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.724586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.724599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.724616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.724629 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.829369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.829450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.829468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.829500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.829520 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.933270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.933337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.933362 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.933391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:33 crc kubenswrapper[4729]: I0127 06:48:33.933414 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:33Z","lastTransitionTime":"2026-01-27T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.037063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.037173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.037190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.037217 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.037236 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.140662 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.140726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.140741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.140763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.140778 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.244193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.244232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.244243 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.244261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.244273 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.326537 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:39:37.929570927 +0000 UTC Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.347590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.347675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.347695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.347727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.347744 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.451409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.451455 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.451470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.451494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.451506 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.555045 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.555457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.555485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.555510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.555698 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.659572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.659617 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.659629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.659651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.659663 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.762064 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.762167 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.762193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.762226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.762257 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.868817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.868858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.868871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.868889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.868905 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.972277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.972361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.972374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.972417 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:34 crc kubenswrapper[4729]: I0127 06:48:34.972431 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:34Z","lastTransitionTime":"2026-01-27T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.076144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.076209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.076230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.076258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.076277 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.179303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.179411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.179435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.179527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.179555 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.282874 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.282955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.282977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.283012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.283034 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.327423 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:32:49.978464945 +0000 UTC Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.362120 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.362147 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.362232 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:35 crc kubenswrapper[4729]: E0127 06:48:35.362310 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.362328 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:35 crc kubenswrapper[4729]: E0127 06:48:35.362449 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:35 crc kubenswrapper[4729]: E0127 06:48:35.362508 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:35 crc kubenswrapper[4729]: E0127 06:48:35.362566 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.386651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.386740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.386760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.386789 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.386809 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.490502 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.490561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.490579 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.490604 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.490621 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.594780 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.594842 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.594862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.594890 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.594909 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.698615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.698771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.698798 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.698827 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.698850 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.801896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.801948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.801965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.801991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.802011 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.905276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.905352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.905370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.905397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:35 crc kubenswrapper[4729]: I0127 06:48:35.905416 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:35Z","lastTransitionTime":"2026-01-27T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.009255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.009311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.009323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.009344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.009357 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.112490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.112563 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.112580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.112609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.112626 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.215855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.215906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.215923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.215952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.215970 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.318480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.318531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.318543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.318563 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.318575 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.327764 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:17:56.966245763 +0000 UTC Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.363229 4729 scope.go:117] "RemoveContainer" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.378594 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.397517 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.415163 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.423488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.423572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.423590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.423645 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.423664 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.430766 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.450330 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.466741 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.487638 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.510145 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71c
c53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.526474 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.526519 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.526530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.526551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.526566 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.529877 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.543907 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a38
31c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.561014 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.578419 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.591462 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.610003 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.624169 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.628714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.628758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.628781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.628803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.628818 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.638048 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.648547 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.659902 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.732993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.733038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 
06:48:36.733049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.733083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.733096 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.836334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.836379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.836388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.836421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.836431 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.934445 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/2.log" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939284 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:36Z","lastTransitionTime":"2026-01-27T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.939790 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.940716 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.959267 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:36 crc kubenswrapper[4729]: I0127 06:48:36.979105 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.002337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:36Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.018695 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.038096 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.042113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.042151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.042162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.042178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.042189 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.059314 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.080955 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.093907 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.111655 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.129170 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.145560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.145609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.145622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.145647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.145662 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.154149 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.178435 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1
bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.198030 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.212454 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.230170 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.243319 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.248182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.248235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.248246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.248266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.248278 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.255642 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.267590 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.328528 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:37:56.407205839 +0000 UTC Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.350673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.350726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.350739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.350762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.350775 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.361975 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.361999 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.362015 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:37 crc kubenswrapper[4729]: E0127 06:48:37.362145 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:37 crc kubenswrapper[4729]: E0127 06:48:37.362247 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:37 crc kubenswrapper[4729]: E0127 06:48:37.362337 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.362261 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:37 crc kubenswrapper[4729]: E0127 06:48:37.362659 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.454344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.454625 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.454737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.454869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.454962 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.558726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.558777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.558789 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.558810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.558823 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.661949 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.662052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.662109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.662138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.662156 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.765673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.765733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.765743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.765763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.765777 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.868943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.868984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.868994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.869010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.869021 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.945967 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/3.log" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.947247 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/2.log" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.950311 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" exitCode=1 Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.950357 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.950402 4729 scope.go:117] "RemoveContainer" containerID="382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.951269 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:48:37 crc kubenswrapper[4729]: E0127 06:48:37.951531 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.970498 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.975415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.975483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.975497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.975516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.975526 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:37Z","lastTransitionTime":"2026-01-27T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:37 crc kubenswrapper[4729]: I0127 06:48:37.991619 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.006624 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.018467 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.034870 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.048705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.061428 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.072749 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.078399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.078440 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.078453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.078474 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.078488 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.085698 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.100786 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.124566 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1
bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ad321164e39807a7635a5d9d7d3753ce6de0808bc62b316e9e49b4d7d7575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"message\\\":\\\"shift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nI0127 06:48:10.457512 6263 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 11.64µs\\\\nI0127 06:48:10.457519 6263 services_controller.go:356] Processing sync for service default/kubernetes for network=default\\\\nI0127 06:48:10.457524 6263 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457535 6263 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457545 6263 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 06:48:10.457552 6263 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 06:48:10.457558 6263 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 06:48:10.457571 6263 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:48:10.457643 6263 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: 
controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch
\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.141185 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.153635 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.168819 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.181657 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.182107 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.182270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.182423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.182560 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.182758 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.198745 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.211890 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.228421 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.287059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.287133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc 
kubenswrapper[4729]: I0127 06:48:38.287151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.287176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.287198 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.329576 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:18:26.094799876 +0000 UTC Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.390372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.390445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.390471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.390504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.390529 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.494362 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.494414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.494423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.494444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.494455 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.596947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.596996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.597009 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.597025 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.597036 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.700426 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.700491 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.700502 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.700516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.700528 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.803822 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.803904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.803928 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.803960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.803995 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.906482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.906558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.906583 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.906612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.906636 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:38Z","lastTransitionTime":"2026-01-27T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.958176 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/3.log" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.964143 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:48:38 crc kubenswrapper[4729]: E0127 06:48:38.964322 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:38 crc kubenswrapper[4729]: I0127 06:48:38.990976 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.010944 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.011635 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.011732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.011754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.013241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.013336 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.030559 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.049032 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.072229 4729 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.089737 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.111654 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.116381 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.116435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.116455 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.116480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.116498 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.126129 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.141739 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.157194 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.170804 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.190603 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.208161 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.219042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.219304 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.219470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.219604 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.219725 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.221380 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.241738 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1
bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.256635 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.267568 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.281470 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:39Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.322571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.322854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc 
kubenswrapper[4729]: I0127 06:48:39.322987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.323183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.323331 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.329761 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:10:32.923248318 +0000 UTC Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.362250 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.362250 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.362292 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.362332 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:39 crc kubenswrapper[4729]: E0127 06:48:39.363016 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:39 crc kubenswrapper[4729]: E0127 06:48:39.363334 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:39 crc kubenswrapper[4729]: E0127 06:48:39.363430 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:39 crc kubenswrapper[4729]: E0127 06:48:39.363227 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.426528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.426581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.426596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.426619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.426633 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.529888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.529919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.529928 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.529945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.529955 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.632990 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.633038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.633054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.633108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.633126 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.736036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.736104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.736115 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.736137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.736148 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.840056 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.840131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.840144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.840166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.840177 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.942771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.942854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.942873 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.942902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:39 crc kubenswrapper[4729]: I0127 06:48:39.942925 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:39Z","lastTransitionTime":"2026-01-27T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.045896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.045964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.045982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.046013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.046033 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.149060 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.149130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.149140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.149161 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.149174 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.252631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.252810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.252825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.252843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.253211 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.330569 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:16:11.535511389 +0000 UTC Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.356580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.356641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.356652 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.356675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.356687 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.459212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.459533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.459628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.459717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.459778 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.562757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.563033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.563136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.563259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.563337 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.667372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.667637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.667708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.667829 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.667910 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.770926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.770994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.771017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.771045 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.771066 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.874384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.874631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.874730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.874804 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.874867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.977330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.977379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.977416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.977442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:40 crc kubenswrapper[4729]: I0127 06:48:40.977458 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:40Z","lastTransitionTime":"2026-01-27T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.080441 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.080511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.080528 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.080555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.080575 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.184693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.184759 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.184776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.184802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.184819 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.290261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.290363 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.290383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.290410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.290427 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.332403 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:39:15.599997766 +0000 UTC Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.362267 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.362308 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.362456 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.362637 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.362308 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.362969 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.362874 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.363567 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.393983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.394024 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.394036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.394056 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.394096 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.496426 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.496477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.496489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.496507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.496517 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.500947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.501013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.501033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.501059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.501104 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.515632 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:41Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.521486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.521550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.521574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.521636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.521660 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.539580 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:41Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.544120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.544157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.544168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.544185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.544195 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.559279 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:41Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.564667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.564883 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.564962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.565123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.565231 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.580060 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:41Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.584987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.585603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.585718 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.585840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.585955 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.603253 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:41Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:41 crc kubenswrapper[4729]: E0127 06:48:41.603420 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.605026 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.605209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.605287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.605374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.605458 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.707834 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.707886 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.707897 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.707916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.707928 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.811509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.811881 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.812103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.812309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.812504 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.916186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.916597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.916693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.916809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.916896 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:41Z","lastTransitionTime":"2026-01-27T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.976045 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/0.log" Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.976413 4729 generic.go:334] "Generic (PLEG): container finished" podID="15e81784-44b6-45c7-a893-4b38366a1b5e" containerID="0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2" exitCode=1 Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.976537 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerDied","Data":"0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2"} Jan 27 06:48:41 crc kubenswrapper[4729]: I0127 06:48:41.977228 4729 scope.go:117] "RemoveContainer" containerID="0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.003228 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.020758 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.021348 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.021439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.021523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.021560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.021592 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.037834 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.054642 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.068476 4729 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.085140 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.106638 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124064 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124123 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.124899 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.138487 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.157250 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.168553 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.182567 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.195263 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.208403 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.224386 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"2026-01-27T06:47:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a\\\\n2026-01-27T06:47:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a to /host/opt/cni/bin/\\\\n2026-01-27T06:47:56Z [verbose] multus-daemon started\\\\n2026-01-27T06:47:56Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:48:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.226589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.226669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.226684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.226706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.226723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.248684 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1
bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.262800 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.275922 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:42Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.329603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.329639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.329648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.329663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.329674 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.332791 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:16:49.22341186 +0000 UTC Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.432556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.432607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.432622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.432643 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.433046 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.536796 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.536848 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.536864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.536889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.536906 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.640706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.640758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.640770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.640789 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.640801 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.744110 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.744181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.744204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.744240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.744264 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.847372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.847434 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.847449 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.847470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.847484 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.950954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.951023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.951052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.951096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.951109 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:42Z","lastTransitionTime":"2026-01-27T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.984159 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/0.log" Jan 27 06:48:42 crc kubenswrapper[4729]: I0127 06:48:42.984236 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerStarted","Data":"b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.004492 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.026753 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.043203 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.054335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.054416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.054467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.054492 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.054540 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.065356 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"2026-01-27T06:47:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a\\\\n2026-01-27T06:47:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a to /host/opt/cni/bin/\\\\n2026-01-27T06:47:56Z [verbose] multus-daemon started\\\\n2026-01-27T06:47:56Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:48:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.095551 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.116286 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.142225 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.155294 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.155552 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156330 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:47.156299364 +0000 UTC m=+152.223420627 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.156404 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.156437 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.156467 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.156499 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156607 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156649 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:49:47.156640096 +0000 UTC m=+152.223761359 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156827 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156845 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156871 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:49:47.156864604 +0000 UTC m=+152.223985857 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156850 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156891 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156894 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156917 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156927 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156917 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:49:47.156912595 +0000 UTC m=+152.224033858 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.156962 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:49:47.156953817 +0000 UTC m=+152.224075080 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.157648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.157679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.157690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.157709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.157722 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.170591 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.188214 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.199538 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.212466 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.225450 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.241440 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.251474 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.261160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.261215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.261231 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.261257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.261274 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.267042 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.280514 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.294279 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.333336 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:54:23.782673206 +0000 UTC Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.362165 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.362216 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.362302 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.362344 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.362981 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.363138 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.363274 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:43 crc kubenswrapper[4729]: E0127 06:48:43.363372 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.364092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.364210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.364278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.364358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.364442 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.467157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.467215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.467235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.467266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.467289 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.569962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.570012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.570021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.570037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.570049 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.675145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.675211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.675233 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.675260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.675280 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.778178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.778259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.778281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.778312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.778334 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.881363 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.881427 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.881443 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.881470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.881487 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.985570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.985621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.985689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.985718 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:43 crc kubenswrapper[4729]: I0127 06:48:43.985803 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:43Z","lastTransitionTime":"2026-01-27T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.090103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.090154 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.090163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.090199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.090210 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.193183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.193247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.193264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.193289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.193306 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.296316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.296354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.296363 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.296380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.296396 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.334578 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:02:30.588797716 +0000 UTC Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.399250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.399330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.399357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.399394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.399418 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.502367 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.502407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.502418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.502436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.502448 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.605138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.605191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.605201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.605219 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.605229 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.707726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.707797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.707815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.707842 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.707859 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.811477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.811549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.811569 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.811598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.811615 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.913892 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.913947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.913964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.913992 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:44 crc kubenswrapper[4729]: I0127 06:48:44.914010 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:44Z","lastTransitionTime":"2026-01-27T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.017553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.017741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.017763 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.017790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.017811 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.120543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.120582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.120592 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.120610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.120621 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.224971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.225046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.225066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.225120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.225140 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.329197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.329266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.329285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.329311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.329330 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.335277 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:09:34.610129277 +0000 UTC Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.361654 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.361796 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:45 crc kubenswrapper[4729]: E0127 06:48:45.361862 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.361671 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:45 crc kubenswrapper[4729]: E0127 06:48:45.362018 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:45 crc kubenswrapper[4729]: E0127 06:48:45.362203 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.362235 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:45 crc kubenswrapper[4729]: E0127 06:48:45.362348 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.432854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.432924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.432947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.432980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.433010 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.537481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.537554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.537566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.537593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.537606 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.640548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.640587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.640628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.640646 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.640657 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.744125 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.744192 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.744207 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.744230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.744245 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.847684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.847735 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.847749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.847770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.847781 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.951246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.951311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.951328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.951358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:45 crc kubenswrapper[4729]: I0127 06:48:45.951384 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:45Z","lastTransitionTime":"2026-01-27T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.054299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.054354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.054369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.054390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.054403 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.157815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.157884 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.157903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.157931 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.157949 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.260399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.260447 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.260457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.260478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.260491 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.336262 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:34:38.545541717 +0000 UTC Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.364882 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.365100 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.365762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.365937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.366021 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.383568 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.397436 4729 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d
8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.414963 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.428912 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.447110 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.461193 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.469131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.469329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.469445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.469539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.469608 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.482060 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.498206 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.514679 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.529453 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"2026-01-27T06:47:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a\\\\n2026-01-27T06:47:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a to /host/opt/cni/bin/\\\\n2026-01-27T06:47:56Z [verbose] multus-daemon started\\\\n2026-01-27T06:47:56Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:48:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.559484 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.572380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.572421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.572433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.572452 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.572462 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.579210 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.595930 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a38
31c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.614843 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.630589 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.643562 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.657405 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 
06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.673010 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.678211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.678258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.678269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.678290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.678304 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.782498 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.782549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.782560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.782577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.782591 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.885787 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.885839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.885850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.885870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.885882 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.988785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.988855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.988866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.988889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:46 crc kubenswrapper[4729]: I0127 06:48:46.988901 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:46Z","lastTransitionTime":"2026-01-27T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.092479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.092552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.092573 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.092603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.092620 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.195293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.195367 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.195389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.195423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.195446 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.299011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.299172 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.299186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.299206 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.299217 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.336861 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:08:30.186597283 +0000 UTC Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.362311 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.362322 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.362346 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.362508 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:47 crc kubenswrapper[4729]: E0127 06:48:47.362599 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:47 crc kubenswrapper[4729]: E0127 06:48:47.362980 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:47 crc kubenswrapper[4729]: E0127 06:48:47.363037 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:47 crc kubenswrapper[4729]: E0127 06:48:47.362843 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.403722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.403783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.403794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.403811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.403823 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.507479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.507531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.507549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.507572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.507589 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.612179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.612257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.612275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.612301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.612325 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.715066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.715130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.715140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.715156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.715167 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.818432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.818506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.818525 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.818554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.818573 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.921296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.921341 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.921353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.921372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:47 crc kubenswrapper[4729]: I0127 06:48:47.921387 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:47Z","lastTransitionTime":"2026-01-27T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.024504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.024585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.024599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.024622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.024634 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.127872 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.127942 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.127963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.127993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.128012 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.231298 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.231360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.231372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.231394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.231410 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.334645 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.334687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.334698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.334714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.334733 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.338107 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:48:28.764418883 +0000 UTC Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.437389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.437434 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.437445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.437463 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.437475 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.541066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.541151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.541169 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.541194 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.541213 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.644014 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.644061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.644098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.644117 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.644129 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.747394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.747460 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.747478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.747506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.747523 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.850734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.850776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.850788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.850805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.850815 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.953806 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.953846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.953856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.953872 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:48 crc kubenswrapper[4729]: I0127 06:48:48.953884 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:48Z","lastTransitionTime":"2026-01-27T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.057614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.057667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.057684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.057709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.057727 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.160518 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.160574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.160585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.160606 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.160621 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.263855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.263902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.263913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.263930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.263941 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.338306 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:58:55.619754085 +0000 UTC Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.361795 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.361853 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:49 crc kubenswrapper[4729]: E0127 06:48:49.361985 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.362009 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.362038 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:49 crc kubenswrapper[4729]: E0127 06:48:49.362304 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:49 crc kubenswrapper[4729]: E0127 06:48:49.362740 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:49 crc kubenswrapper[4729]: E0127 06:48:49.362646 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.367499 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.367530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.367539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.367554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.367568 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.471239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.471358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.471375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.471402 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.471419 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.575175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.575270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.575283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.575301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.575717 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.679629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.679693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.679716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.679790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.679817 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.783144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.783200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.783213 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.783242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.783258 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.886051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.886110 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.886124 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.886142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.886152 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.988691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.988742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.988752 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.988770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:49 crc kubenswrapper[4729]: I0127 06:48:49.988782 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:49Z","lastTransitionTime":"2026-01-27T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.092338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.092410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.092431 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.092465 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.092489 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.195950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.195991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.195999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.196013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.196023 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.299134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.299272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.299302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.299334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.299357 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.338983 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:38:25.599244276 +0000 UTC Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.402404 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.402477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.402486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.402504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.402514 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.505758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.505870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.505885 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.505907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.505921 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.609846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.609915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.609935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.609960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.609982 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.713984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.714037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.714054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.714112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.714136 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.816950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.817343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.817436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.817507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.817587 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.921495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.921942 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.922010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.922124 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:50 crc kubenswrapper[4729]: I0127 06:48:50.922222 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:50Z","lastTransitionTime":"2026-01-27T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.025965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.026024 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.026046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.026116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.026563 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.130183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.130240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.130259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.130284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.130301 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.233574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.233619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.233630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.233649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.233660 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.336924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.336967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.336978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.336998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.337011 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.339146 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:44:03.606285727 +0000 UTC Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.361609 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.361670 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.361685 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.361623 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.361813 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.361955 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.361993 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.362265 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.440854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.440915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.440928 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.440947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.440958 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.544150 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.544248 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.544266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.544294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.544313 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.647948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.648006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.648017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.648037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.648049 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.752032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.752109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.752121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.752142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.752157 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.781569 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.781637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.781654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.781678 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.781695 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.802627 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.810499 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.810570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.810590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.810619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.810640 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.829998 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.835472 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.835557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.835582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.835620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.835646 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.854988 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.860760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.860811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.860824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.860851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.860865 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.876868 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.882038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.882103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.882121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.882145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.882162 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.904627 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"feb6d084-d54a-4546-b726-6ec1313128f5\\\",\\\"systemUUID\\\":\\\"01d3f01e-8834-4e65-95ae-95ce1cb627e3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:51 crc kubenswrapper[4729]: E0127 06:48:51.904960 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.907551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.907594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.907611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.907632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:51 crc kubenswrapper[4729]: I0127 06:48:51.907650 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:51Z","lastTransitionTime":"2026-01-27T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.011152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.011199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.011214 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.011234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.011265 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.119538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.119595 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.119611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.119638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.119657 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.223006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.223130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.223152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.223188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.223211 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.326620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.326664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.326674 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.326692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.326703 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.340111 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:26:57.108822012 +0000 UTC Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.398145 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.430161 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.430235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.430254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.430282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.430301 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.533672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.533722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.533738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.533760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.533779 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.637858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.637917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.637941 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.637971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.637995 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.740937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.740990 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.741001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.741021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.741034 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.844091 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.844133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.844143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.844160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.844171 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.947051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.947172 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.947193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.947220 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:52 crc kubenswrapper[4729]: I0127 06:48:52.947240 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:52Z","lastTransitionTime":"2026-01-27T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.050548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.050768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.050893 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.050930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.050952 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.154475 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.154511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.154521 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.154539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.154556 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.257879 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.257924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.257935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.257956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.257967 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.341003 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:52:42.297887745 +0000 UTC Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.360875 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.360919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.360933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.360956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.360968 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.361525 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.361603 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.361674 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:53 crc kubenswrapper[4729]: E0127 06:48:53.361825 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.361912 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:53 crc kubenswrapper[4729]: E0127 06:48:53.362194 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:53 crc kubenswrapper[4729]: E0127 06:48:53.362376 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:53 crc kubenswrapper[4729]: E0127 06:48:53.362477 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.363165 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:48:53 crc kubenswrapper[4729]: E0127 06:48:53.363338 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.464166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.464216 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.464232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.464286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.464303 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.567231 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.567289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.567300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.567322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.567335 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.670535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.670590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.670603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.670623 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.670635 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.773868 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.773932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.773954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.774001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.774030 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.876584 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.876634 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.876663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.876689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.876706 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.979606 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.979647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.979658 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.979676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:53 crc kubenswrapper[4729]: I0127 06:48:53.979688 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:53Z","lastTransitionTime":"2026-01-27T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.083738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.083835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.083861 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.083930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.083957 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.186738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.186786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.186795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.186813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.186825 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.290285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.290371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.290387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.290415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.290433 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.341596 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:48:28.406338019 +0000 UTC Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.394027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.394105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.394123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.394149 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.394168 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.497757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.497836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.497858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.497891 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.497911 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.601881 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.601962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.601980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.602008 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.602026 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.705844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.706022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.706050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.706153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.706177 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.808649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.808726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.808744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.808772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.808791 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.911473 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.911530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.911549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.911577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:54 crc kubenswrapper[4729]: I0127 06:48:54.911594 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:54Z","lastTransitionTime":"2026-01-27T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.015577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.015632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.015650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.015676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.015693 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.118866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.118904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.118916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.118934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.118947 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.222974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.223053 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.223116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.223146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.223165 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.326560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.326663 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.326685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.326714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.326732 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.342111 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:15:23.428944992 +0000 UTC Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.362637 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.362664 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.362676 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.362902 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:55 crc kubenswrapper[4729]: E0127 06:48:55.362915 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:55 crc kubenswrapper[4729]: E0127 06:48:55.363120 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:55 crc kubenswrapper[4729]: E0127 06:48:55.363504 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:55 crc kubenswrapper[4729]: E0127 06:48:55.363824 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.431665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.431985 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.432098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.432191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.432298 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.536515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.536586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.536605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.537127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.537183 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.640590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.640671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.640692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.640726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.640749 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.744873 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.745017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.745042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.745098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.745116 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.847540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.847610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.847620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.847656 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.847669 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.951536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.951600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.951618 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.951648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:55 crc kubenswrapper[4729]: I0127 06:48:55.951666 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:55Z","lastTransitionTime":"2026-01-27T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.055400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.055786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.055808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.055844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.055867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.159374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.159452 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.159476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.159511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.159534 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.262419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.262461 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.262471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.262488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.262498 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.343418 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:16:13.724441064 +0000 UTC Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.366210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.366284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.366293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.366309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.366320 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.377419 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b6f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f924f11-f70f-436a-a7e5-fb7d0feeabc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8032efe2410f6e1b78c166b56e66c76e16e2e8071e21a6c6a7f4e30f64b6bead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mpjkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b6f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.394701 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b8949c5-4022-49a3-af0d-2580921d3b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff4d2d23725ffe3f2f235415437a0a260719b5830b70441fc24dfeb35796a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f3cd0d7655752be024e0dabaa0cc89de65f7f65a4b3dff097da84b5b90dec7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57f31d20ebd778cc69fb15f2ccf82c95e4d7a0c577db2fc6504a567985362987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33ff1063020baec79be8bd390fba6ccb8b85546d7d0e06d11ba2cfa86cb1aabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3368781fee4cc61df12d91ad769f8a6996bfb9b726fd0b1d206f6095e7f82536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4eb17ce04b67e385c0223210cf2f1551bf3a09fc3d15f301c86c99d9f0767104\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05f4e1f7350ef0301f12010d2d6d3c40f38231609850df629d088b97c048026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj47b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cmwl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.416845 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3ce9174-ae55-4e97-8dd6-96c11ac10b59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:47:32.418130 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:47:32.418749 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1942557989/tls.crt::/tmp/serving-cert-1942557989/tls.key\\\\\\\"\\\\nI0127 06:47:38.357964 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:47:38.364901 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:47:38.364928 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:47:38.364954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:47:38.364958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:47:38.372671 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:47:38.373361 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:47:38.373378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:47:38.373382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:47:38.373387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:47:38.373392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:47:38.373670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:47:38.375709 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.436598 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8352645-9977-41fa-84eb-7a67b391fa2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce838944124b7a39c8d0c39307003f4a9149a1cb1589b7f1b45fb2ecf47a8461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7afac591df13c31cd9f615883bffa12ec74e8b05802654982b56bdf167227de6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae19e48f9ae7ddba7dc50621ae3b4aba9b877e6d8c748adfb0209094d461c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.451110 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"526865eb-4ab7-486d-925d-6b4583d6b86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94892a12e760b8b79f54e89819fd3fd9c72c7a64b2ef01709d0f2a69626d19a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9cdc21a8cfdfd0b6
da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-695bm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5x25t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470196 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470216 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.470961 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c156b30-d262-4fdc-a70b-eb1703422f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fg22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqs5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.496931 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09568bd52efea4fe9da414a59602a10978fc641abf79a90617d1e1d3800ddd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.517729 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.533887 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.551243 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-czw8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f0a2a1b-0118-4509-b664-8bf0c6b22cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85cffe46c0fff38c72f98c0572be8b3de85115fab5d1c6db42e0a94160c02d97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bczgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-czw8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.573286 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66219e0fd27a36a3e9a92141e4941a8b6a74a4eb598edadc24480393b8031c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e563f3ec72db1488d36aa217672c31a279ccfa2d43a2bbf8cb8790a4b8bb002c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 
2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.574188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.574345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.574451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.574564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.574675 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.589300 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ffa8267d0b3f85764ffa86a659c44ca154ebe347e62aa96d089debab8a67e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 
06:48:56.606518 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-45zq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15e81784-44b6-45c7-a893-4b38366a1b5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:41Z\\\",\\\"message\\\":\\\"2026-01-27T06:47:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a\\\\n2026-01-27T06:47:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_29f72435-b5e1-41ac-8659-8d5ca3efe56a to /host/opt/cni/bin/\\\\n2026-01-27T06:47:56Z [verbose] multus-daemon started\\\\n2026-01-27T06:47:56Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:48:41Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:48:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4m7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-45zq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.632556 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4dbf50d-949f-4203-873a-7ced1d5a5015\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:48:37Z\\\",\\\"message\\\":\\\"bility:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075328fb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0127 06:48:37.564861 6618 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:48:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgfn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-95wgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.650497 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5024cbe-3737-4112-85f9-29bcd70f8a5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:48:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd453b2b9c02305936313eb325e96a4cc4fbf4e9cf72ffde89c41fa9060bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fdbc2befb109569aa1990b03ad731087c1207ef7eee1048f05fef14acc5d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560d82c66c2f87bcfd1aeaa8ebda05ab0fcd510a07dd94f0f54e5e494be3405a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9032b3a65d79bf83ca7602b3786d26e0de8582a9ad84adcd52d668b485d1015\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.665505 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"726954f7-a525-4db7-83d1-d69678201315\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c1e6b127f85f6a08268520e176a5015d40dd55b0a869d8a32ce068c44e4970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221502b4079a77f4740c19a3831c9c609cb10903978e54573daf66ec3dabef4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.677441 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.677508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.677520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.677539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.677550 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.688453 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"375fb4b8-e29d-4a47-85b2-1367979837ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64eeadb25974d0bc31e425e8881f88ee6f8bc69ff9809ba87ccf2955d94257f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dd3d386709f79224ab5ce8f89c26652fdd0d8fff22ba29524283bbef7b7732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e950cdd897eb4f787289d5bfb2f86e4e7296248d5ae7d6d592f37e50efa34f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2448d7ceb31f8033b5c9df76642b4fbb1abfbfda1324f07fa4b785833210bcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ae0a7d076c7b15eabb691bd1fa321e40165619e37bbfe45c38f0a1f22a147e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57749e66de7a960bba8b8031d34e4e72233776723a14046318c4e81fc3d4276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57749e66de7a960bba8b8031d34e4e72233776723a14046318c4e81fc3d4276a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98733521bf34ecc19bb4335bf60d50f41385866f65ecf0a892ed802ecca5510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98733521bf34ecc19bb4335bf60d50f41385866f65ecf0a892ed802ecca5510c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9846ccd5229bb7d39b0ca981703d2f38d16f97ca621f4da3ade8fba8db6dad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9846ccd5229bb7d39b0ca981703d2f38d16f97ca621f4da3ade8fba8db6dad97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:47:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.705623 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.719143 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756bbc18-5ad0-4bbe-a612-30720a9f5fe3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a7925de155dcece4c381a8150412056ebd03a3b49abb9dea90d47a2182c3828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19115f9d32c532c9d3744d4f271ff03ab3e2da9520ac7c37c93d2d6589dadf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tk4lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjp6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.779864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.779920 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.779938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.779963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.779982 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.882244 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.882278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.882288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.882307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.882337 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.985037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.985179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.985201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.985233 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:56 crc kubenswrapper[4729]: I0127 06:48:56.985254 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:56Z","lastTransitionTime":"2026-01-27T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.087813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.087875 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.087888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.087911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.087927 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.190980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.191048 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.191103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.191135 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.191153 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.293001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.293063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.293130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.293156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.293173 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.343846 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:40:47.80756829 +0000 UTC Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.362319 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.362392 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.362320 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.362535 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.362525 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.362655 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.362791 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.363191 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.396143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.396196 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.396252 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.396286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.396309 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.499545 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.499616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.499633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.499668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.499686 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.603437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.603512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.603533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.603561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.603581 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.618206 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.618461 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:57 crc kubenswrapper[4729]: E0127 06:48:57.618605 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs podName:2c156b30-d262-4fdc-a70b-eb1703422f01 nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.618568965 +0000 UTC m=+166.685690268 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs") pod "network-metrics-daemon-xqs5z" (UID: "2c156b30-d262-4fdc-a70b-eb1703422f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.707176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.707230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.707243 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.707264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.707276 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.811191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.811231 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.811241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.811258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.811268 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.914257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.914332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.914357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.914389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:57 crc kubenswrapper[4729]: I0127 06:48:57.914411 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:57Z","lastTransitionTime":"2026-01-27T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.017606 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.017650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.017662 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.017680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.017690 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.120813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.120876 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.120888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.120904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.120915 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.224060 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.224129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.224142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.224164 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.224180 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.327051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.327115 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.327124 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.327142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.327152 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.344309 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:58:23.390680835 +0000 UTC Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.430093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.430143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.430156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.430176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.430193 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.533330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.533389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.533406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.533433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.533450 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.636494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.636555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.636568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.636590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.636607 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.740526 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.740601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.740631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.740655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.740672 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.844016 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.844092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.844106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.844127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.844139 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.947273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.947324 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.947334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.947357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:58 crc kubenswrapper[4729]: I0127 06:48:58.947371 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:58Z","lastTransitionTime":"2026-01-27T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.050224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.050278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.050289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.050310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.050324 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.153134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.153202 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.153220 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.153245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.153268 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.256723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.256795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.256819 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.256855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.256876 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.344686 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:50:01.807976517 +0000 UTC Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.360723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.360807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.360825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.360851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.360868 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.361738 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:59 crc kubenswrapper[4729]: E0127 06:48:59.361903 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.361973 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.362209 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:59 crc kubenswrapper[4729]: E0127 06:48:59.362215 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.362275 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:48:59 crc kubenswrapper[4729]: E0127 06:48:59.362370 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:59 crc kubenswrapper[4729]: E0127 06:48:59.362463 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.464856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.464960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.464990 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.465013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.465024 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.568906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.568960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.568975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.568997 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.569008 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.671793 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.671850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.671860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.671882 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.671895 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.775539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.775594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.775607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.775630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.775647 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.879540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.879610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.879632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.879660 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.879679 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.982131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.982182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.982199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.982221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:59 crc kubenswrapper[4729]: I0127 06:48:59.982237 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:59Z","lastTransitionTime":"2026-01-27T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.085038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.085108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.085120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.085141 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.085156 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.187729 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.187807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.187831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.187863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.187889 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.291163 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.291205 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.291216 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.291234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.291246 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.345480 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:03:31.143090127 +0000 UTC Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.394530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.394595 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.394615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.394643 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.394663 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.498270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.498335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.498348 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.498366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.498379 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.601991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.602046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.602054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.602113 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.602125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.704897 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.704959 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.704969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.704987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.705199 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.807837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.807906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.807918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.807945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.807956 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.911479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.911513 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.911521 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.911536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:00 crc kubenswrapper[4729]: I0127 06:49:00.911544 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:00Z","lastTransitionTime":"2026-01-27T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.014526 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.014570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.014585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.014601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.014610 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.117230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.117279 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.117290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.117309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.117321 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.219882 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.219932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.219944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.219961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.219974 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.322594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.322674 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.322706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.322737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.322758 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.346051 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:32:35.57420236 +0000 UTC Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.362642 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.362704 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.362657 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.362797 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:01 crc kubenswrapper[4729]: E0127 06:49:01.362904 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:01 crc kubenswrapper[4729]: E0127 06:49:01.363128 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:01 crc kubenswrapper[4729]: E0127 06:49:01.363216 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:01 crc kubenswrapper[4729]: E0127 06:49:01.363421 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.426218 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.426284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.426295 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.426311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.426320 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.529509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.529554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.529564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.529583 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.529596 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.632447 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.632504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.632512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.632531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.632545 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.735267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.735329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.735340 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.735360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.735371 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.837880 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.837936 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.837952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.837973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.837986 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.940564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.941284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.941328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.941355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:01 crc kubenswrapper[4729]: I0127 06:49:01.941377 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:01Z","lastTransitionTime":"2026-01-27T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.044644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.044705 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.044722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.044748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.044767 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:02Z","lastTransitionTime":"2026-01-27T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.147544 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.147593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.147616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.147639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.147652 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:02Z","lastTransitionTime":"2026-01-27T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.205459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.205755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.205837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.205917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.206013 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:49:02Z","lastTransitionTime":"2026-01-27T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.275377 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg"] Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.276265 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.281353 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.281910 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.282891 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.287359 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.347387 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:25:45.171316194 +0000 UTC Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.347455 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.359007 4729 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.367372 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-b6f5d" podStartSLOduration=83.367351315 podStartE2EDuration="1m23.367351315s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.314190454 +0000 UTC m=+107.381311757" watchObservedRunningTime="2026-01-27 06:49:02.367351315 +0000 UTC m=+107.434472578" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.374811 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30ddf7f5-501d-44ed-858e-87e30ab1d892-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.374879 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.374901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.375147 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/30ddf7f5-501d-44ed-858e-87e30ab1d892-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.375197 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30ddf7f5-501d-44ed-858e-87e30ab1d892-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.394132 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=84.394108992 podStartE2EDuration="1m24.394108992s" podCreationTimestamp="2026-01-27 06:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.392694064 +0000 UTC m=+107.459815327" watchObservedRunningTime="2026-01-27 06:49:02.394108992 +0000 UTC m=+107.461230255" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.394638 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cmwl2" podStartSLOduration=83.3946305 podStartE2EDuration="1m23.3946305s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.367459819 +0000 UTC m=+107.434581112" watchObservedRunningTime="2026-01-27 06:49:02.3946305 +0000 UTC m=+107.461751763" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.426728 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.426703824 podStartE2EDuration="1m23.426703824s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.413088558 +0000 UTC m=+107.480209821" watchObservedRunningTime="2026-01-27 06:49:02.426703824 +0000 UTC m=+107.493825087" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.427540 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podStartSLOduration=83.427534242 podStartE2EDuration="1m23.427534242s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.425241186 +0000 UTC m=+107.492362459" watchObservedRunningTime="2026-01-27 06:49:02.427534242 +0000 UTC m=+107.494655505" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.475796 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 
06:49:02.475855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.475908 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30ddf7f5-501d-44ed-858e-87e30ab1d892-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.475933 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30ddf7f5-501d-44ed-858e-87e30ab1d892-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.475972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30ddf7f5-501d-44ed-858e-87e30ab1d892-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.475969 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.476043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30ddf7f5-501d-44ed-858e-87e30ab1d892-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.477161 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30ddf7f5-501d-44ed-858e-87e30ab1d892-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.497996 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30ddf7f5-501d-44ed-858e-87e30ab1d892-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.503710 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30ddf7f5-501d-44ed-858e-87e30ab1d892-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tn2tg\" (UID: \"30ddf7f5-501d-44ed-858e-87e30ab1d892\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.529018 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-czw8g" podStartSLOduration=83.528994602 podStartE2EDuration="1m23.528994602s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.506785328 +0000 UTC m=+107.573906591" watchObservedRunningTime="2026-01-27 06:49:02.528994602 +0000 UTC m=+107.596115865" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.543056 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.543036873 podStartE2EDuration="52.543036873s" podCreationTimestamp="2026-01-27 06:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.542187834 +0000 UTC m=+107.609309097" watchObservedRunningTime="2026-01-27 06:49:02.543036873 +0000 UTC m=+107.610158136" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.583924 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=38.583870461 podStartE2EDuration="38.583870461s" podCreationTimestamp="2026-01-27 06:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.554999193 +0000 UTC m=+107.622120456" watchObservedRunningTime="2026-01-27 06:49:02.583870461 +0000 UTC m=+107.650991724" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.597412 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.600331 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.600309962 podStartE2EDuration="10.600309962s" podCreationTimestamp="2026-01-27 06:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.583518869 +0000 UTC m=+107.650640152" watchObservedRunningTime="2026-01-27 06:49:02.600309962 +0000 UTC m=+107.667431225" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.684662 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjp6x" podStartSLOduration=83.684641408 podStartE2EDuration="1m23.684641408s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.684331327 +0000 UTC m=+107.751452600" watchObservedRunningTime="2026-01-27 06:49:02.684641408 +0000 UTC m=+107.751762671" Jan 27 06:49:02 crc kubenswrapper[4729]: I0127 06:49:02.685545 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-45zq7" podStartSLOduration=83.685540808 podStartE2EDuration="1m23.685540808s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:02.669226252 +0000 UTC m=+107.736347535" watchObservedRunningTime="2026-01-27 06:49:02.685540808 +0000 UTC m=+107.752662071" Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.064234 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" event={"ID":"30ddf7f5-501d-44ed-858e-87e30ab1d892","Type":"ContainerStarted","Data":"11d49860b75db098183336bc2bfa0c57535eca5393ce3894a14a07cbf1f072ab"} Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.064310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" event={"ID":"30ddf7f5-501d-44ed-858e-87e30ab1d892","Type":"ContainerStarted","Data":"dca019036b21a5dbb55c592cdf8349660b37b7da493bd34782d6c9af665a6a81"} Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.362518 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.362518 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:03 crc kubenswrapper[4729]: E0127 06:49:03.362734 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.362526 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:03 crc kubenswrapper[4729]: E0127 06:49:03.362819 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:03 crc kubenswrapper[4729]: I0127 06:49:03.362556 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:03 crc kubenswrapper[4729]: E0127 06:49:03.362887 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:03 crc kubenswrapper[4729]: E0127 06:49:03.363116 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:04 crc kubenswrapper[4729]: I0127 06:49:04.362950 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:49:04 crc kubenswrapper[4729]: E0127 06:49:04.363201 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:49:05 crc kubenswrapper[4729]: I0127 06:49:05.361865 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:05 crc kubenswrapper[4729]: I0127 06:49:05.361923 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:05 crc kubenswrapper[4729]: I0127 06:49:05.361923 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:05 crc kubenswrapper[4729]: I0127 06:49:05.362050 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:05 crc kubenswrapper[4729]: E0127 06:49:05.362189 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:05 crc kubenswrapper[4729]: E0127 06:49:05.362302 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:05 crc kubenswrapper[4729]: E0127 06:49:05.362466 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:05 crc kubenswrapper[4729]: E0127 06:49:05.362553 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:07 crc kubenswrapper[4729]: I0127 06:49:07.362347 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:07 crc kubenswrapper[4729]: I0127 06:49:07.362397 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:07 crc kubenswrapper[4729]: I0127 06:49:07.362387 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:07 crc kubenswrapper[4729]: I0127 06:49:07.362347 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:07 crc kubenswrapper[4729]: E0127 06:49:07.362531 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:07 crc kubenswrapper[4729]: E0127 06:49:07.362673 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:07 crc kubenswrapper[4729]: E0127 06:49:07.362810 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:07 crc kubenswrapper[4729]: E0127 06:49:07.362939 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:09 crc kubenswrapper[4729]: I0127 06:49:09.361596 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:09 crc kubenswrapper[4729]: E0127 06:49:09.362994 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:09 crc kubenswrapper[4729]: I0127 06:49:09.362864 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:09 crc kubenswrapper[4729]: E0127 06:49:09.363371 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:09 crc kubenswrapper[4729]: I0127 06:49:09.362812 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:09 crc kubenswrapper[4729]: I0127 06:49:09.363215 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:09 crc kubenswrapper[4729]: E0127 06:49:09.363686 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:09 crc kubenswrapper[4729]: E0127 06:49:09.363895 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:11 crc kubenswrapper[4729]: I0127 06:49:11.361903 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:11 crc kubenswrapper[4729]: I0127 06:49:11.361935 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:11 crc kubenswrapper[4729]: I0127 06:49:11.361946 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:11 crc kubenswrapper[4729]: E0127 06:49:11.362098 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:11 crc kubenswrapper[4729]: E0127 06:49:11.362194 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:11 crc kubenswrapper[4729]: E0127 06:49:11.362447 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:11 crc kubenswrapper[4729]: I0127 06:49:11.363452 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:11 crc kubenswrapper[4729]: E0127 06:49:11.363727 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:13 crc kubenswrapper[4729]: I0127 06:49:13.362039 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:13 crc kubenswrapper[4729]: I0127 06:49:13.362192 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:13 crc kubenswrapper[4729]: I0127 06:49:13.362048 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:13 crc kubenswrapper[4729]: E0127 06:49:13.362279 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:13 crc kubenswrapper[4729]: I0127 06:49:13.362108 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:13 crc kubenswrapper[4729]: E0127 06:49:13.362403 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:13 crc kubenswrapper[4729]: E0127 06:49:13.362615 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:13 crc kubenswrapper[4729]: E0127 06:49:13.362694 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:15 crc kubenswrapper[4729]: I0127 06:49:15.362059 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:15 crc kubenswrapper[4729]: I0127 06:49:15.362161 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:15 crc kubenswrapper[4729]: I0127 06:49:15.362182 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:15 crc kubenswrapper[4729]: I0127 06:49:15.362059 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:15 crc kubenswrapper[4729]: E0127 06:49:15.362291 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:15 crc kubenswrapper[4729]: E0127 06:49:15.362380 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:15 crc kubenswrapper[4729]: E0127 06:49:15.362460 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:15 crc kubenswrapper[4729]: E0127 06:49:15.362569 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:16 crc kubenswrapper[4729]: E0127 06:49:16.305246 4729 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 06:49:16 crc kubenswrapper[4729]: I0127 06:49:16.366530 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:49:16 crc kubenswrapper[4729]: E0127 06:49:16.366926 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-95wgz_openshift-ovn-kubernetes(f4dbf50d-949f-4203-873a-7ced1d5a5015)\"" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" Jan 27 06:49:17 crc kubenswrapper[4729]: E0127 06:49:17.334053 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:17 crc kubenswrapper[4729]: I0127 06:49:17.361611 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:17 crc kubenswrapper[4729]: I0127 06:49:17.361858 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:17 crc kubenswrapper[4729]: I0127 06:49:17.361606 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:17 crc kubenswrapper[4729]: I0127 06:49:17.361663 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:17 crc kubenswrapper[4729]: E0127 06:49:17.362144 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:17 crc kubenswrapper[4729]: E0127 06:49:17.362271 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:17 crc kubenswrapper[4729]: E0127 06:49:17.362564 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:17 crc kubenswrapper[4729]: E0127 06:49:17.362598 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:19 crc kubenswrapper[4729]: I0127 06:49:19.362491 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:19 crc kubenswrapper[4729]: I0127 06:49:19.362555 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:19 crc kubenswrapper[4729]: E0127 06:49:19.362687 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:19 crc kubenswrapper[4729]: E0127 06:49:19.362878 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:19 crc kubenswrapper[4729]: I0127 06:49:19.363169 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:19 crc kubenswrapper[4729]: E0127 06:49:19.363312 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:19 crc kubenswrapper[4729]: I0127 06:49:19.363335 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:19 crc kubenswrapper[4729]: E0127 06:49:19.363646 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:21 crc kubenswrapper[4729]: I0127 06:49:21.362525 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:21 crc kubenswrapper[4729]: I0127 06:49:21.362593 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:21 crc kubenswrapper[4729]: I0127 06:49:21.362633 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:21 crc kubenswrapper[4729]: I0127 06:49:21.362625 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:21 crc kubenswrapper[4729]: E0127 06:49:21.362856 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:21 crc kubenswrapper[4729]: E0127 06:49:21.362994 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:21 crc kubenswrapper[4729]: E0127 06:49:21.363148 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:21 crc kubenswrapper[4729]: E0127 06:49:21.363328 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:22 crc kubenswrapper[4729]: E0127 06:49:22.336601 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:23 crc kubenswrapper[4729]: I0127 06:49:23.362485 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:23 crc kubenswrapper[4729]: E0127 06:49:23.362849 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:23 crc kubenswrapper[4729]: I0127 06:49:23.363317 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:23 crc kubenswrapper[4729]: I0127 06:49:23.363415 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:23 crc kubenswrapper[4729]: I0127 06:49:23.363535 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:23 crc kubenswrapper[4729]: E0127 06:49:23.363582 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:23 crc kubenswrapper[4729]: E0127 06:49:23.363838 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:23 crc kubenswrapper[4729]: E0127 06:49:23.363964 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:25 crc kubenswrapper[4729]: I0127 06:49:25.362511 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:25 crc kubenswrapper[4729]: I0127 06:49:25.362611 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:25 crc kubenswrapper[4729]: E0127 06:49:25.362683 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:25 crc kubenswrapper[4729]: I0127 06:49:25.362610 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:25 crc kubenswrapper[4729]: E0127 06:49:25.362778 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:25 crc kubenswrapper[4729]: I0127 06:49:25.362818 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:25 crc kubenswrapper[4729]: E0127 06:49:25.362937 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:25 crc kubenswrapper[4729]: E0127 06:49:25.363007 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:27 crc kubenswrapper[4729]: E0127 06:49:27.338513 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:27 crc kubenswrapper[4729]: I0127 06:49:27.361560 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:27 crc kubenswrapper[4729]: E0127 06:49:27.361768 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:27 crc kubenswrapper[4729]: I0127 06:49:27.362026 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:27 crc kubenswrapper[4729]: E0127 06:49:27.362145 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:27 crc kubenswrapper[4729]: I0127 06:49:27.362351 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:27 crc kubenswrapper[4729]: E0127 06:49:27.362547 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:27 crc kubenswrapper[4729]: I0127 06:49:27.363027 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:27 crc kubenswrapper[4729]: E0127 06:49:27.363137 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:27 crc kubenswrapper[4729]: I0127 06:49:27.363704 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.163276 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/3.log" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.165905 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerStarted","Data":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.166506 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.167262 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/1.log" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.167654 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/0.log" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.167698 4729 generic.go:334] "Generic (PLEG): container finished" podID="15e81784-44b6-45c7-a893-4b38366a1b5e" containerID="b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39" exitCode=1 Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.167728 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerDied","Data":"b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39"} Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.167756 4729 scope.go:117] "RemoveContainer" containerID="0d89d6341923a0d0b8ab3493f2b1d354da6d263735b53400d227aa0667c192b2" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.168165 4729 scope.go:117] "RemoveContainer" containerID="b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39" Jan 27 06:49:28 crc kubenswrapper[4729]: E0127 06:49:28.168344 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-45zq7_openshift-multus(15e81784-44b6-45c7-a893-4b38366a1b5e)\"" pod="openshift-multus/multus-45zq7" podUID="15e81784-44b6-45c7-a893-4b38366a1b5e" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.198192 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tn2tg" podStartSLOduration=109.198172657 podStartE2EDuration="1m49.198172657s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:03.086745132 +0000 UTC m=+108.153866465" watchObservedRunningTime="2026-01-27 06:49:28.198172657 +0000 UTC m=+133.265293920" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.198530 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podStartSLOduration=109.198525388 podStartE2EDuration="1m49.198525388s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:28.197506756 +0000 UTC m=+133.264628039" watchObservedRunningTime="2026-01-27 06:49:28.198525388 +0000 UTC m=+133.265646651" Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.274717 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqs5z"] Jan 27 06:49:28 crc kubenswrapper[4729]: I0127 06:49:28.274877 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:28 crc kubenswrapper[4729]: E0127 06:49:28.274992 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:29 crc kubenswrapper[4729]: I0127 06:49:29.173659 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/1.log" Jan 27 06:49:29 crc kubenswrapper[4729]: I0127 06:49:29.361610 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:29 crc kubenswrapper[4729]: I0127 06:49:29.361676 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:29 crc kubenswrapper[4729]: I0127 06:49:29.361634 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:29 crc kubenswrapper[4729]: E0127 06:49:29.361834 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:29 crc kubenswrapper[4729]: E0127 06:49:29.361954 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:29 crc kubenswrapper[4729]: E0127 06:49:29.362183 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:30 crc kubenswrapper[4729]: I0127 06:49:30.361805 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:30 crc kubenswrapper[4729]: E0127 06:49:30.361994 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:31 crc kubenswrapper[4729]: I0127 06:49:31.361806 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:31 crc kubenswrapper[4729]: I0127 06:49:31.361840 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:31 crc kubenswrapper[4729]: I0127 06:49:31.361840 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:31 crc kubenswrapper[4729]: E0127 06:49:31.361986 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:31 crc kubenswrapper[4729]: E0127 06:49:31.362148 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:31 crc kubenswrapper[4729]: E0127 06:49:31.362373 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:32 crc kubenswrapper[4729]: E0127 06:49:32.340353 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:32 crc kubenswrapper[4729]: I0127 06:49:32.362463 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:32 crc kubenswrapper[4729]: E0127 06:49:32.362640 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:33 crc kubenswrapper[4729]: I0127 06:49:33.361855 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:33 crc kubenswrapper[4729]: I0127 06:49:33.361924 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:33 crc kubenswrapper[4729]: I0127 06:49:33.361866 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:33 crc kubenswrapper[4729]: E0127 06:49:33.362254 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:33 crc kubenswrapper[4729]: E0127 06:49:33.362372 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:33 crc kubenswrapper[4729]: E0127 06:49:33.362125 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:34 crc kubenswrapper[4729]: I0127 06:49:34.361978 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:34 crc kubenswrapper[4729]: E0127 06:49:34.362219 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:35 crc kubenswrapper[4729]: I0127 06:49:35.361892 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:35 crc kubenswrapper[4729]: I0127 06:49:35.361918 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:35 crc kubenswrapper[4729]: E0127 06:49:35.362218 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:35 crc kubenswrapper[4729]: I0127 06:49:35.362304 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:35 crc kubenswrapper[4729]: E0127 06:49:35.362438 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:35 crc kubenswrapper[4729]: E0127 06:49:35.362513 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:36 crc kubenswrapper[4729]: I0127 06:49:36.362279 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:36 crc kubenswrapper[4729]: E0127 06:49:36.363795 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:37 crc kubenswrapper[4729]: E0127 06:49:37.342118 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:37 crc kubenswrapper[4729]: I0127 06:49:37.361944 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:37 crc kubenswrapper[4729]: I0127 06:49:37.362138 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:37 crc kubenswrapper[4729]: E0127 06:49:37.362215 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:37 crc kubenswrapper[4729]: I0127 06:49:37.362237 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:37 crc kubenswrapper[4729]: E0127 06:49:37.362369 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:37 crc kubenswrapper[4729]: E0127 06:49:37.362537 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:38 crc kubenswrapper[4729]: I0127 06:49:38.361967 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:38 crc kubenswrapper[4729]: E0127 06:49:38.362542 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:39 crc kubenswrapper[4729]: I0127 06:49:39.361834 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:39 crc kubenswrapper[4729]: E0127 06:49:39.362806 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:39 crc kubenswrapper[4729]: I0127 06:49:39.364371 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:39 crc kubenswrapper[4729]: E0127 06:49:39.365837 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:39 crc kubenswrapper[4729]: I0127 06:49:39.364626 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:39 crc kubenswrapper[4729]: E0127 06:49:39.366893 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:39 crc kubenswrapper[4729]: I0127 06:49:39.619695 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:49:40 crc kubenswrapper[4729]: I0127 06:49:40.362762 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:40 crc kubenswrapper[4729]: E0127 06:49:40.364023 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:41 crc kubenswrapper[4729]: I0127 06:49:41.361817 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:41 crc kubenswrapper[4729]: I0127 06:49:41.361853 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:41 crc kubenswrapper[4729]: I0127 06:49:41.361874 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:41 crc kubenswrapper[4729]: E0127 06:49:41.362748 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:41 crc kubenswrapper[4729]: E0127 06:49:41.363237 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:41 crc kubenswrapper[4729]: E0127 06:49:41.363480 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:42 crc kubenswrapper[4729]: E0127 06:49:42.344042 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:49:42 crc kubenswrapper[4729]: I0127 06:49:42.362562 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:42 crc kubenswrapper[4729]: E0127 06:49:42.362761 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:42 crc kubenswrapper[4729]: I0127 06:49:42.363396 4729 scope.go:117] "RemoveContainer" containerID="b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39" Jan 27 06:49:43 crc kubenswrapper[4729]: I0127 06:49:43.226969 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/1.log" Jan 27 06:49:43 crc kubenswrapper[4729]: I0127 06:49:43.227049 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerStarted","Data":"869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e"} Jan 27 06:49:43 crc kubenswrapper[4729]: I0127 06:49:43.362269 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:43 crc kubenswrapper[4729]: I0127 06:49:43.362334 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:43 crc kubenswrapper[4729]: I0127 06:49:43.362341 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:43 crc kubenswrapper[4729]: E0127 06:49:43.362461 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:43 crc kubenswrapper[4729]: E0127 06:49:43.362604 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:43 crc kubenswrapper[4729]: E0127 06:49:43.362715 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:44 crc kubenswrapper[4729]: I0127 06:49:44.362570 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:44 crc kubenswrapper[4729]: E0127 06:49:44.362879 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:45 crc kubenswrapper[4729]: I0127 06:49:45.361595 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:45 crc kubenswrapper[4729]: I0127 06:49:45.361596 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:45 crc kubenswrapper[4729]: E0127 06:49:45.361916 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:49:45 crc kubenswrapper[4729]: I0127 06:49:45.361629 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:45 crc kubenswrapper[4729]: E0127 06:49:45.362050 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:49:45 crc kubenswrapper[4729]: E0127 06:49:45.362147 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:49:46 crc kubenswrapper[4729]: I0127 06:49:46.361991 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:46 crc kubenswrapper[4729]: E0127 06:49:46.363027 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqs5z" podUID="2c156b30-d262-4fdc-a70b-eb1703422f01" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.243864 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244117 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:51:49.244040218 +0000 UTC m=+274.311161511 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.244313 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.244365 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.244418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244490 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244569 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:51:49.244545172 +0000 UTC m=+274.311666435 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.244481 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244617 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244629 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244666 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244688 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244739 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:51:49.244708708 +0000 UTC m=+274.311830021 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244774 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:51:49.244757529 +0000 UTC m=+274.311878902 (durationBeforeRetry 2m2s). 
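The durationBeforeRetry 2m2s in the nestedpendingoperations entries is the ceiling of the kubelet's exponential backoff for failed volume operations: the delay roughly doubles from an initial value of about 500ms on each consecutive failure until it saturates at that cap (the exact constants are assumed from upstream kubelet defaults rather than read from this log). A minimal sketch of that schedule:

from datetime import timedelta

# Assumed kubelet defaults for volume-operation backoff: 500ms initial delay,
# doubling per consecutive failure, capped at 2m2s -- matching the
# "durationBeforeRetry 2m2s" values seen in the entries above.
INITIAL = timedelta(milliseconds=500)
CAP = timedelta(minutes=2, seconds=2)


def duration_before_retry(failures: int) -> timedelta:
    """Backoff delay after `failures` consecutive errors (failures >= 1)."""
    delay = INITIAL * (2 ** (failures - 1))
    return min(delay, CAP)


if __name__ == "__main__":
    for n in range(1, 12):
        print(f"failure {n:2d}: retry in {duration_before_retry(n)}")
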
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244771 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244829 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244852 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:49:47 crc kubenswrapper[4729]: E0127 06:49:47.244927 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:51:49.244909284 +0000 UTC m=+274.312030587 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.362264 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.362315 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.362357 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.367472 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.367821 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.367866 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:49:47 crc kubenswrapper[4729]: I0127 06:49:47.368550 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:49:48 crc kubenswrapper[4729]: I0127 06:49:48.362727 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:49:48 crc kubenswrapper[4729]: I0127 06:49:48.365217 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:49:48 crc kubenswrapper[4729]: I0127 06:49:48.365587 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.278483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.335168 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.335841 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.336106 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.337186 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.337824 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-th46k"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.338398 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.350978 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.351579 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.356154 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.357896 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.358604 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.368941 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.388015 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.388138 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.389561 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.389980 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.390109 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.390207 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416468 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91329e26-4dca-44c0-b703-f555d141e214-audit-dir\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416526 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dj5\" (UniqueName: \"kubernetes.io/projected/4810b36b-9b85-4f57-a5f0-3943e80c8386-kube-api-access-p7dj5\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416551 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-config\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416570 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-images\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416600 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-serving-cert\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416628 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-etcd-client\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416651 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416663 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416685 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-encryption-config\") pod 
\"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416704 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs68p\" (UniqueName: \"kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416721 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-audit-policies\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416739 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416755 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416775 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416774 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416908 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvtb\" (UniqueName: \"kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416938 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r99z\" (UniqueName: \"kubernetes.io/projected/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-kube-api-access-4r99z\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416962 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.416984 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417036 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417038 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417057 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnnh\" (UniqueName: \"kubernetes.io/projected/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-kube-api-access-kgnnh\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417117 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ms9v\" (UniqueName: \"kubernetes.io/projected/91329e26-4dca-44c0-b703-f555d141e214-kube-api-access-5ms9v\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417140 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417149 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417174 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4810b36b-9b85-4f57-a5f0-3943e80c8386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417197 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-machine-approver-tls\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417221 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4810b36b-9b85-4f57-a5f0-3943e80c8386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417242 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-auth-proxy-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417265 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417306 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417331 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.417476 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418179 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418281 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418447 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418546 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418657 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418676 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418721 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418865 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.418964 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.424065 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.425529 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.425761 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.426919 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.428222 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.428260 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.429179 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.433233 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.437804 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.437982 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.438057 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.438938 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6mcqw"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.439305 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.439424 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.439831 4729 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.439950 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.440160 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.440264 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.440411 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.441272 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.441381 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.441505 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.444359 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.444877 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q79v4"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.445671 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.447271 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.449535 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jlbz8"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.450275 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.454082 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.454701 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.455468 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.455896 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.456127 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fb2x5"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.456690 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.457991 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.458389 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.458872 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.461549 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.462849 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vhwcv"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.463177 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.463718 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.471489 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27hdf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.472220 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.472478 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.480366 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.481037 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.485126 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.485713 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.485931 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nl6sq"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.486231 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.486498 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.486615 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.492016 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.495799 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.497909 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.498396 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.498771 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.502219 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nx6kp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.503201 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.510108 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.514117 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.514604 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.514944 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.515207 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.515297 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.515410 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.516909 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517040 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517134 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517268 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517352 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517454 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517788 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517873 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.517944 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.518062 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.518574 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.518956 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522779 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dj5\" (UniqueName: \"kubernetes.io/projected/4810b36b-9b85-4f57-a5f0-3943e80c8386-kube-api-access-p7dj5\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522814 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-config\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522836 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-images\") pod 
\"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522865 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522887 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522909 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16e432e9-5557-4178-9589-2fdb42148c92-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522931 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-serving-cert\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522952 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4k7k\" (UniqueName: \"kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522972 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-service-ca\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.522991 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-service-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523011 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-etcd-client\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523030 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523050 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr67\" (UniqueName: \"kubernetes.io/projected/2bae531e-3aec-4cee-b651-5d04190e91d5-kube-api-access-gsr67\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523088 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-srv-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523111 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523132 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523164 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523186 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-client\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523205 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-encryption-config\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523226 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bae531e-3aec-4cee-b651-5d04190e91d5-serving-cert\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523245 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-audit-policies\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523264 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523283 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs68p\" (UniqueName: \"kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523303 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wc8x\" (UniqueName: \"kubernetes.io/projected/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-kube-api-access-9wc8x\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523325 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb6pt\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-kube-api-access-fb6pt\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523346 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2bae531e-3aec-4cee-b651-5d04190e91d5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523370 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523393 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523419 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb9535b-f866-4db2-8d3c-a0d9c8475684-metrics-tls\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523440 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbzv\" (UniqueName: \"kubernetes.io/projected/de054efe-cae5-4667-b75b-9b134cef5386-kube-api-access-5zbzv\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523462 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9lw\" (UniqueName: \"kubernetes.io/projected/b1fbe809-ef9b-45f9-bbe6-937813005a23-kube-api-access-6k9lw\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523487 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvtb\" (UniqueName: \"kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523507 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r99z\" (UniqueName: \"kubernetes.io/projected/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-kube-api-access-4r99z\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523529 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90ba36b-29b6-4380-bc08-c2c385bebb76-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523548 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc71aa3-1214-4e05-b377-c47d7af89214-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: 
\"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523569 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523588 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523606 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523629 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523647 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxx5k\" (UniqueName: \"kubernetes.io/projected/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-kube-api-access-sxx5k\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523676 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-config\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523698 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a19963-7dfa-4c9b-b840-5fe912fcea71-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523720 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523741 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523761 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90ba36b-29b6-4380-bc08-c2c385bebb76-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523794 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523817 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-srv-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523855 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523876 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8sk5\" (UniqueName: \"kubernetes.io/projected/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-kube-api-access-x8sk5\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj24\" (UniqueName: 
\"kubernetes.io/projected/849b4067-5e9e-4864-912a-d5a7aa747232-kube-api-access-6sj24\") pod \"downloads-7954f5f757-jlbz8\" (UID: \"849b4067-5e9e-4864-912a-d5a7aa747232\") " pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ms9v\" (UniqueName: \"kubernetes.io/projected/91329e26-4dca-44c0-b703-f555d141e214-kube-api-access-5ms9v\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.523943 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.545261 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.545482 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.545589 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.545683 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.545776 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.547839 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.548111 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.539538 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551420 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551485 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8bm\" (UniqueName: 
\"kubernetes.io/projected/16e432e9-5557-4178-9589-2fdb42148c92-kube-api-access-5c8bm\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551538 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnnh\" (UniqueName: \"kubernetes.io/projected/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-kube-api-access-kgnnh\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551591 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4810b36b-9b85-4f57-a5f0-3943e80c8386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551626 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-machine-approver-tls\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4810b36b-9b85-4f57-a5f0-3943e80c8386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-auth-proxy-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551767 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztl5\" (UniqueName: \"kubernetes.io/projected/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-kube-api-access-zztl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551803 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc0d7d79-ad0c-4a31-861d-45a209142c0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551837 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-config\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551884 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551910 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551942 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d7d79-ad0c-4a31-861d-45a209142c0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.551974 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-serving-cert\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552003 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-profile-collector-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552054 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552114 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jlts\" (UniqueName: \"kubernetes.io/projected/8cb9535b-f866-4db2-8d3c-a0d9c8475684-kube-api-access-6jlts\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552227 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552262 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552299 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552325 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc71aa3-1214-4e05-b377-c47d7af89214-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552355 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-ca\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552388 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552419 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552532 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90ba36b-29b6-4380-bc08-c2c385bebb76-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552562 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4a19963-7dfa-4c9b-b840-5fe912fcea71-proxy-tls\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-config\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552626 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552665 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfhj\" (UniqueName: \"kubernetes.io/projected/491db327-a9d5-420d-bef2-34193c435226-kube-api-access-fdfhj\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552693 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc71aa3-1214-4e05-b377-c47d7af89214-config\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552747 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4tf4\" (UniqueName: \"kubernetes.io/projected/c4a19963-7dfa-4c9b-b840-5fe912fcea71-kube-api-access-v4tf4\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552787 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91329e26-4dca-44c0-b703-f555d141e214-audit-dir\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552812 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-trusted-ca\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552843 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de054efe-cae5-4667-b75b-9b134cef5386-serving-cert\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.552874 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-serving-cert\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.553809 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.555599 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.564834 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.566083 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.567560 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.568213 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.609327 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.609616 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.610278 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.611055 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.611449 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.611653 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.611755 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.611985 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.612083 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.612200 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.612363 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.613208 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-audit-policies\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.613603 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91329e26-4dca-44c0-b703-f555d141e214-audit-dir\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.618654 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.619297 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.619773 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-auth-proxy-config\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.620223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.620293 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4810b36b-9b85-4f57-a5f0-3943e80c8386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.620384 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.620562 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.620656 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91329e26-4dca-44c0-b703-f555d141e214-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.626325 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4810b36b-9b85-4f57-a5f0-3943e80c8386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.627177 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-images\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.627490 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.627829 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628282 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628379 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628434 4729 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628531 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628653 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628774 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628845 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.628899 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.630021 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-machine-approver-tls\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.630288 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.630661 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.630806 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.630968 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.631195 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.631321 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.631443 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.631568 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.631906 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.632062 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.632217 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.632371 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.632516 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.633379 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.633492 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.633641 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.633872 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.637775 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.640249 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.641185 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.641486 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.641858 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.642383 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.644262 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.644275 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.644805 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.651778 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.651831 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.652368 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.652585 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.652813 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.652951 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-th46k"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653015 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.652977 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653677 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653711 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc71aa3-1214-4e05-b377-c47d7af89214-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653733 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-ca\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653753 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653770 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90ba36b-29b6-4380-bc08-c2c385bebb76-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653790 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c4a19963-7dfa-4c9b-b840-5fe912fcea71-proxy-tls\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653807 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-config\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653824 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653841 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdfhj\" (UniqueName: \"kubernetes.io/projected/491db327-a9d5-420d-bef2-34193c435226-kube-api-access-fdfhj\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653857 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc71aa3-1214-4e05-b377-c47d7af89214-config\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653886 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4tf4\" (UniqueName: \"kubernetes.io/projected/c4a19963-7dfa-4c9b-b840-5fe912fcea71-kube-api-access-v4tf4\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653905 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-trusted-ca\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de054efe-cae5-4667-b75b-9b134cef5386-serving-cert\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653951 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-serving-cert\") pod \"console-operator-58897d9998-q79v4\" (UID: 
\"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653968 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.653987 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654003 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16e432e9-5557-4178-9589-2fdb42148c92-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654041 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4k7k\" (UniqueName: \"kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654057 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-service-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-service-ca\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654117 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654135 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr67\" (UniqueName: \"kubernetes.io/projected/2bae531e-3aec-4cee-b651-5d04190e91d5-kube-api-access-gsr67\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 
crc kubenswrapper[4729]: I0127 06:49:53.654153 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-srv-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654180 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654214 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654232 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-client\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bae531e-3aec-4cee-b651-5d04190e91d5-serving-cert\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654275 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wc8x\" (UniqueName: \"kubernetes.io/projected/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-kube-api-access-9wc8x\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654292 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb6pt\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-kube-api-access-fb6pt\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654313 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2bae531e-3aec-4cee-b651-5d04190e91d5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654336 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5zbzv\" (UniqueName: \"kubernetes.io/projected/de054efe-cae5-4667-b75b-9b134cef5386-kube-api-access-5zbzv\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654353 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9lw\" (UniqueName: \"kubernetes.io/projected/b1fbe809-ef9b-45f9-bbe6-937813005a23-kube-api-access-6k9lw\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654371 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb9535b-f866-4db2-8d3c-a0d9c8475684-metrics-tls\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654387 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90ba36b-29b6-4380-bc08-c2c385bebb76-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654404 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc71aa3-1214-4e05-b377-c47d7af89214-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654431 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654448 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654464 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxx5k\" (UniqueName: \"kubernetes.io/projected/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-kube-api-access-sxx5k\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654482 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c4a19963-7dfa-4c9b-b840-5fe912fcea71-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654504 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-config\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654528 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654544 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90ba36b-29b6-4380-bc08-c2c385bebb76-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654562 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654579 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-srv-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654607 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654623 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8sk5\" (UniqueName: \"kubernetes.io/projected/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-kube-api-access-x8sk5\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654642 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654660 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj24\" (UniqueName: \"kubernetes.io/projected/849b4067-5e9e-4864-912a-d5a7aa747232-kube-api-access-6sj24\") pod \"downloads-7954f5f757-jlbz8\" (UID: \"849b4067-5e9e-4864-912a-d5a7aa747232\") " pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654678 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c8bm\" (UniqueName: \"kubernetes.io/projected/16e432e9-5557-4178-9589-2fdb42148c92-kube-api-access-5c8bm\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654704 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654720 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654744 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztl5\" (UniqueName: \"kubernetes.io/projected/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-kube-api-access-zztl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654759 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc0d7d79-ad0c-4a31-861d-45a209142c0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654774 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-config\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654789 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/cc0d7d79-ad0c-4a31-861d-45a209142c0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654804 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-serving-cert\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654820 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-profile-collector-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654837 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654856 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654874 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jlts\" (UniqueName: \"kubernetes.io/projected/8cb9535b-f866-4db2-8d3c-a0d9c8475684-kube-api-access-6jlts\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.654889 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.655450 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.657518 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-ca\") pod \"etcd-operator-b45778765-vhwcv\" 
(UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.658117 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-service-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.658563 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-service-ca\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.662025 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.662464 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-etcd-client\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.663121 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.664137 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-config\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.664637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.664776 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.666646 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.671797 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.672011 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-encryption-config\") pod 
\"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.672754 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.673163 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.696000 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2bae531e-3aec-4cee-b651-5d04190e91d5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.696657 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.697003 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5cd4"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.697901 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.698761 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.702041 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-trusted-ca\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.702787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.706288 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.706358 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.706668 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.709960 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.713059 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91329e26-4dca-44c0-b703-f555d141e214-serving-cert\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.713379 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-config\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.713786 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.714157 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.715043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a19963-7dfa-4c9b-b840-5fe912fcea71-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.715459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de054efe-cae5-4667-b75b-9b134cef5386-config\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.723749 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.723997 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.724476 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb9535b-f866-4db2-8d3c-a0d9c8475684-metrics-tls\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.726832 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.732574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bae531e-3aec-4cee-b651-5d04190e91d5-serving-cert\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.732580 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.733526 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.735964 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.737185 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-serving-cert\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.740864 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.741144 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de054efe-cae5-4667-b75b-9b134cef5386-serving-cert\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.741192 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc0d7d79-ad0c-4a31-861d-45a209142c0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.742604 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fbe809-ef9b-45f9-bbe6-937813005a23-config\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.742764 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.743092 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.743533 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.743730 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.744059 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-etcd-client\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.744300 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.745023 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.745049 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.745493 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.745543 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.746627 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.749252 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.749308 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.751790 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.754593 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.755405 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6mcqw"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.756565 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.757379 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d7d79-ad0c-4a31-861d-45a209142c0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.758283 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.758692 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jlbz8"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.758760 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bhh2l"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.759052 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fbe809-ef9b-45f9-bbe6-937813005a23-serving-cert\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:53 crc 
kubenswrapper[4729]: I0127 06:49:53.759365 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.760868 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znw8f"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.762638 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q79v4"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.762794 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.765673 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fb2x5"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.767357 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.767410 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.769778 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nx6kp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.770902 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.770986 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.771238 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.771329 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4a19963-7dfa-4c9b-b840-5fe912fcea71-proxy-tls\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.772218 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.774473 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m4pm9"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.778608 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.781509 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.781686 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.781681 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.782291 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.782389 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27hdf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.782449 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.783862 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vhwcv"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.785326 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.786853 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p9gf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.788194 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.788479 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.789472 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.791159 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4g6cc"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.791961 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.792024 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.793291 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.794299 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.796453 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.796593 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4pm9"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.797534 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p9gf"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.798487 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.802612 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znw8f"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.803590 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.805091 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.807022 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.816364 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.817731 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.820976 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5cd4"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.822399 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4g6cc"] Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.844420 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.861822 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.879230 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 
06:49:53.898522 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.918081 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.939576 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.958670 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.965546 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a90ba36b-29b6-4380-bc08-c2c385bebb76-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.982424 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.984371 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90ba36b-29b6-4380-bc08-c2c385bebb76-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:53 crc kubenswrapper[4729]: I0127 06:49:53.998936 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.018877 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.038272 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.046321 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc71aa3-1214-4e05-b377-c47d7af89214-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.059663 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.065666 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc71aa3-1214-4e05-b377-c47d7af89214-config\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:54 crc kubenswrapper[4729]: 
I0127 06:49:54.079150 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.089621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16e432e9-5557-4178-9589-2fdb42148c92-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.098838 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.123938 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.139029 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.146247 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-srv-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.158389 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.166576 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-srv-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.178725 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.198021 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.207721 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-profile-collector-cert\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.208335 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/491db327-a9d5-420d-bef2-34193c435226-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.218455 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.278924 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.282193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnnh\" (UniqueName: \"kubernetes.io/projected/b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4-kube-api-access-kgnnh\") pod \"machine-api-operator-5694c8668f-th46k\" (UID: \"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.299988 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.319642 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.339489 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.359253 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.379510 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.399183 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.419058 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.439007 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.459630 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.499492 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.504495 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvtb\" (UniqueName: \"kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb\") pod \"route-controller-manager-6576b87f9c-vzdl2\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.518641 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.539541 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.553297 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.577043 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.580822 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs68p\" (UniqueName: \"kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p\") pod \"controller-manager-879f6c89f-lsthv\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.589738 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.598420 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r99z\" (UniqueName: \"kubernetes.io/projected/859f2a44-973c-4a21-97bd-6ba4ffd8b68a-kube-api-access-4r99z\") pod \"machine-approver-56656f9798-zs5fn\" (UID: \"859f2a44-973c-4a21-97bd-6ba4ffd8b68a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.604376 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.616728 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dj5\" (UniqueName: \"kubernetes.io/projected/4810b36b-9b85-4f57-a5f0-3943e80c8386-kube-api-access-p7dj5\") pod \"openshift-apiserver-operator-796bbdcf4f-8jfj6\" (UID: \"4810b36b-9b85-4f57-a5f0-3943e80c8386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.637056 4729 request.go:700] Waited for 1.003691509s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-665b6dd947-bbn5f Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.639711 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ms9v\" (UniqueName: \"kubernetes.io/projected/91329e26-4dca-44c0-b703-f555d141e214-kube-api-access-5ms9v\") pod \"apiserver-7bbb656c7d-4zb87\" (UID: \"91329e26-4dca-44c0-b703-f555d141e214\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.660879 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.680376 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.717607 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbzv\" (UniqueName: 
\"kubernetes.io/projected/de054efe-cae5-4667-b75b-9b134cef5386-kube-api-access-5zbzv\") pod \"authentication-operator-69f744f599-6mcqw\" (UID: \"de054efe-cae5-4667-b75b-9b134cef5386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.733189 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc71aa3-1214-4e05-b377-c47d7af89214-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d47tg\" (UID: \"5dc71aa3-1214-4e05-b377-c47d7af89214\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.756378 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4k7k\" (UniqueName: \"kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k\") pod \"oauth-openshift-558db77b4-fb2x5\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.772461 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.772802 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr67\" (UniqueName: \"kubernetes.io/projected/2bae531e-3aec-4cee-b651-5d04190e91d5-kube-api-access-gsr67\") pod \"openshift-config-operator-7777fb866f-d9qg8\" (UID: \"2bae531e-3aec-4cee-b651-5d04190e91d5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.793522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.813326 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90ba36b-29b6-4380-bc08-c2c385bebb76-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zrf48\" (UID: \"a90ba36b-29b6-4380-bc08-c2c385bebb76\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.832574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfhj\" (UniqueName: \"kubernetes.io/projected/491db327-a9d5-420d-bef2-34193c435226-kube-api-access-fdfhj\") pod \"olm-operator-6b444d44fb-6fq6s\" (UID: \"491db327-a9d5-420d-bef2-34193c435226\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.853711 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8sk5\" (UniqueName: \"kubernetes.io/projected/f2e9569f-7ac6-45ed-bc0f-8d989c015d98-kube-api-access-x8sk5\") pod \"cluster-samples-operator-665b6dd947-bbn5f\" (UID: \"f2e9569f-7ac6-45ed-bc0f-8d989c015d98\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:54 crc 
kubenswrapper[4729]: I0127 06:49:54.866375 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.879112 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9lw\" (UniqueName: \"kubernetes.io/projected/b1fbe809-ef9b-45f9-bbe6-937813005a23-kube-api-access-6k9lw\") pod \"etcd-operator-b45778765-vhwcv\" (UID: \"b1fbe809-ef9b-45f9-bbe6-937813005a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.889275 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.893112 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.898948 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4tf4\" (UniqueName: \"kubernetes.io/projected/c4a19963-7dfa-4c9b-b840-5fe912fcea71-kube-api-access-v4tf4\") pod \"machine-config-controller-84d6567774-swdwp\" (UID: \"c4a19963-7dfa-4c9b-b840-5fe912fcea71\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.914324 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wc8x\" (UniqueName: \"kubernetes.io/projected/5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b-kube-api-access-9wc8x\") pod \"console-operator-58897d9998-q79v4\" (UID: \"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b\") " pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.937714 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb6pt\" (UniqueName: \"kubernetes.io/projected/cc0d7d79-ad0c-4a31-861d-45a209142c0e-kube-api-access-fb6pt\") pod \"cluster-image-registry-operator-dc59b4c8b-f6ptg\" (UID: \"cc0d7d79-ad0c-4a31-861d-45a209142c0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.943539 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.954781 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.959599 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj24\" (UniqueName: \"kubernetes.io/projected/849b4067-5e9e-4864-912a-d5a7aa747232-kube-api-access-6sj24\") pod \"downloads-7954f5f757-jlbz8\" (UID: \"849b4067-5e9e-4864-912a-d5a7aa747232\") " pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.972301 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.972699 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.979788 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.992616 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg"] Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.992183 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:54 crc kubenswrapper[4729]: I0127 06:49:54.995939 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c8bm\" (UniqueName: \"kubernetes.io/projected/16e432e9-5557-4178-9589-2fdb42148c92-kube-api-access-5c8bm\") pod \"multus-admission-controller-857f4d67dd-nx6kp\" (UID: \"16e432e9-5557-4178-9589-2fdb42148c92\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.010845 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.011451 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.017165 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.018663 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.025223 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.032822 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-th46k"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.040962 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.042201 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.048211 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.064536 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.064792 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.079954 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.080484 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.088416 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.104146 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.106284 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.122523 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.134636 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.139146 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 06:49:55 crc kubenswrapper[4729]: W0127 06:49:55.157125 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4810b36b_9b85_4f57_a5f0_3943e80c8386.slice/crio-0ccbcb4e9adc01d0abd6c24dc521da2f27d59fe88fc7c937fe1f063748983985 WatchSource:0}: Error finding container 0ccbcb4e9adc01d0abd6c24dc521da2f27d59fe88fc7c937fe1f063748983985: Status 404 returned error can't find the container with id 0ccbcb4e9adc01d0abd6c24dc521da2f27d59fe88fc7c937fe1f063748983985 Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.158893 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.179686 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.200308 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.211855 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.246167 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxx5k\" (UniqueName: \"kubernetes.io/projected/a5520083-62f9-4b9f-bdc1-ca238d9a4f9e-kube-api-access-sxx5k\") pod \"catalog-operator-68c6474976-7rp9c\" (UID: \"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.260835 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztl5\" (UniqueName: 
\"kubernetes.io/projected/6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e-kube-api-access-zztl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-xcqlf\" (UID: \"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.279284 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.287382 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jlts\" (UniqueName: \"kubernetes.io/projected/8cb9535b-f866-4db2-8d3c-a0d9c8475684-kube-api-access-6jlts\") pod \"dns-operator-744455d44c-27hdf\" (UID: \"8cb9535b-f866-4db2-8d3c-a0d9c8475684\") " pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.295377 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" event={"ID":"859f2a44-973c-4a21-97bd-6ba4ffd8b68a","Type":"ContainerStarted","Data":"5b512aec0031f062a5d998432ef0840391b25c52daa016d6e382c3ca3c7a8ecb"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.295427 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" event={"ID":"859f2a44-973c-4a21-97bd-6ba4ffd8b68a","Type":"ContainerStarted","Data":"a564186fe73922ff5881ab4b78f6869aa22e952055fb1bfb42245983ebe38b0b"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.298544 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.299812 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" event={"ID":"5dc71aa3-1214-4e05-b377-c47d7af89214","Type":"ContainerStarted","Data":"4cc52fc1ad176d9d5af4e8f655f3ad8f357ad71b5875aa7d2006095e39e6b154"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.306551 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.315762 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" event={"ID":"c64cf35a-6747-43f9-bde5-c8060518bcda","Type":"ContainerStarted","Data":"c6e9e92070b51c04aef0bd93a5c7e46e5e86980032f95442b5c24254a61ed460"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.317943 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.318512 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" event={"ID":"91329e26-4dca-44c0-b703-f555d141e214","Type":"ContainerStarted","Data":"65af2931c0d5244faa22b623b640a2a410371ca38bb602a759e6883c30627cd3"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.334372 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" event={"ID":"4810b36b-9b85-4f57-a5f0-3943e80c8386","Type":"ContainerStarted","Data":"0ccbcb4e9adc01d0abd6c24dc521da2f27d59fe88fc7c937fe1f063748983985"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.343800 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.354250 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" event={"ID":"b9b62aea-0185-4b23-998b-a210a5612512","Type":"ContainerStarted","Data":"032b999a2619a1a156fa8896ae6c902e4136287f895264e8b052136adc00e7a4"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.354292 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" event={"ID":"b9b62aea-0185-4b23-998b-a210a5612512","Type":"ContainerStarted","Data":"db04705c2a10f960cb387459fde22771749bf1df0472ed46ea7fc1ae00a07be7"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.354448 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.363544 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.371211 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" event={"ID":"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4","Type":"ContainerStarted","Data":"27339d398f99b87a447239fc3aabdfee2d02cacd64416602f49b7114ddb1504b"} Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.378839 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.394321 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.404303 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.408538 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q79v4"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.419393 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lsthv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.419448 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.419824 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.436488 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.438059 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.467015 4729 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.478356 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.503389 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.523566 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.540718 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.559020 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.578157 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.578210 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6mcqw"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.583397 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 
06:49:55.600110 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.619534 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.628981 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vhwcv"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.638095 4729 request.go:700] Waited for 1.849092891s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0 Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.642554 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.658855 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.689231 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.698369 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.721775 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.747128 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.753612 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fb2x5"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.763942 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.783282 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.823572 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.825615 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.835369 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nx6kp"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.836563 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.839045 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.888098 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27hdf"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.891455 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-stats-auth\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.891500 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwjh5\" (UniqueName: \"kubernetes.io/projected/5b3c2904-e0f2-436a-8172-6639cc9661a9-kube-api-access-cwjh5\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.891538 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.891570 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.891591 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8fba97f-964e-45a8-8265-2c21dcb75903-metrics-tls\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892183 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85aba95-efd2-4600-8451-bc92e280b4da-config\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892216 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-default-certificate\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892336 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8fba97f-964e-45a8-8265-2c21dcb75903-trusted-ca\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892374 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892393 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892412 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t5zb\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-kube-api-access-7t5zb\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892432 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.892645 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c"] Jan 27 06:49:55 crc kubenswrapper[4729]: E0127 06:49:55.893096 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.393063987 +0000 UTC m=+161.460185240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.893187 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3c2904-e0f2-436a-8172-6639cc9661a9-service-ca-bundle\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.893217 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-metrics-certs\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.893234 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.893255 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902030 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b85aba95-efd2-4600-8451-bc92e280b4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902326 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85aba95-efd2-4600-8451-bc92e280b4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902353 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: 
I0127 06:49:55.902427 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902452 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902480 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zvh\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902518 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902549 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902575 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902590 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.902632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.928210 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jlbz8"] Jan 27 
06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.933953 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48"] Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.960411 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp"] Jan 27 06:49:55 crc kubenswrapper[4729]: W0127 06:49:55.987302 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849b4067_5e9e_4864_912a_d5a7aa747232.slice/crio-b5b0729c7af282f59a56ad056faf8e2847f41ea35bc20de500931c4a42979a29 WatchSource:0}: Error finding container b5b0729c7af282f59a56ad056faf8e2847f41ea35bc20de500931c4a42979a29: Status 404 returned error can't find the container with id b5b0729c7af282f59a56ad056faf8e2847f41ea35bc20de500931c4a42979a29 Jan 27 06:49:55 crc kubenswrapper[4729]: I0127 06:49:55.990057 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf"] Jan 27 06:49:55 crc kubenswrapper[4729]: W0127 06:49:55.997164 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a19963_7dfa_4c9b_b840_5fe912fcea71.slice/crio-71b0808f13661a9ca87c05114c02ea96eefa88c337f2074bebbc784b8dde1614 WatchSource:0}: Error finding container 71b0808f13661a9ca87c05114c02ea96eefa88c337f2074bebbc784b8dde1614: Status 404 returned error can't find the container with id 71b0808f13661a9ca87c05114c02ea96eefa88c337f2074bebbc784b8dde1614 Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.003860 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.004927 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9z4g\" (UniqueName: \"kubernetes.io/projected/c5821305-00b6-46d4-8ba2-f9930b07a9c0-kube-api-access-s9z4g\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.004970 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvwh\" (UniqueName: \"kubernetes.io/projected/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-kube-api-access-pdvwh\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005001 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005021 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhpsk\" (UniqueName: \"kubernetes.io/projected/6154c87e-cea0-46aa-b314-0c22fbaee635-kube-api-access-zhpsk\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005035 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005052 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvqv\" (UniqueName: \"kubernetes.io/projected/6c414209-4eca-4639-afb5-80710e5077d1-kube-api-access-8rvqv\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005081 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-node-pullsecrets\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005096 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6154c87e-cea0-46aa-b314-0c22fbaee635-tmpfs\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005153 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwjh5\" (UniqueName: \"kubernetes.io/projected/5b3c2904-e0f2-436a-8172-6639cc9661a9-kube-api-access-cwjh5\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005173 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8t85\" (UniqueName: \"kubernetes.io/projected/8e2973a3-194c-45c5-9679-1d06e941b31a-kube-api-access-f8t85\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005189 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005215 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-stats-auth\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.005273 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-mountpoint-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.005338 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.505305339 +0000 UTC m=+161.572426602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.008648 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b998caf-1b12-48d4-b9f5-ac76e2920993-serving-cert\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.008707 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.008768 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kt6h\" (UniqueName: \"kubernetes.io/projected/18883629-496c-42b8-8fab-68b3daa9fcad-kube-api-access-8kt6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: \"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.008921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8fba97f-964e-45a8-8265-2c21dcb75903-metrics-tls\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.008998 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-metrics-tls\") pod 
\"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gkk\" (UniqueName: \"kubernetes.io/projected/283cce7a-b2b3-4fe9-902a-3c550d7a290d-kube-api-access-26gkk\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009338 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqc8\" (UniqueName: \"kubernetes.io/projected/7b998caf-1b12-48d4-b9f5-ac76e2920993-kube-api-access-zlqc8\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009386 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85aba95-efd2-4600-8451-bc92e280b4da-config\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009436 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-webhook-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009885 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.009925 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-default-certificate\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.010050 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85aba95-efd2-4600-8451-bc92e280b4da-config\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.010160 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spf5\" (UniqueName: \"kubernetes.io/projected/a7b01586-c627-4be0-aa6f-c0942b4d14de-kube-api-access-9spf5\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 
crc kubenswrapper[4729]: I0127 06:49:56.011823 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8nm7\" (UniqueName: \"kubernetes.io/projected/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-kube-api-access-m8nm7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.011942 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8fba97f-964e-45a8-8265-2c21dcb75903-trusted-ca\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.012033 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-images\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.012172 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-image-import-ca\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.012297 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjd4\" (UniqueName: \"kubernetes.io/projected/6df7b129-e660-446b-9d4d-6f505f6071ad-kube-api-access-zdjd4\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.012472 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.013984 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.014383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.014841 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7t5zb\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-kube-api-access-7t5zb\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.014886 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.014921 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c414209-4eca-4639-afb5-80710e5077d1-proxy-tls\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.014972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015036 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-node-bootstrap-token\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015139 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3c2904-e0f2-436a-8172-6639cc9661a9-service-ca-bundle\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015164 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-metrics-certs\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015193 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015252 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-socket-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015351 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b85aba95-efd2-4600-8451-bc92e280b4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015380 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85aba95-efd2-4600-8451-bc92e280b4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015401 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015460 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b998caf-1b12-48d4-b9f5-ac76e2920993-config\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015481 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-certs\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015503 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-key\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnngx\" (UniqueName: \"kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx\") pod \"collect-profiles-29491605-s9fpg\" (UID: 
\"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015549 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/283cce7a-b2b3-4fe9-902a-3c550d7a290d-cert\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015658 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015710 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rnv\" (UniqueName: \"kubernetes.io/projected/5c32ef88-558f-48ac-be94-40d911643943-kube-api-access-w7rnv\") pod \"migrator-59844c95c7-fkcx9\" (UID: \"5c32ef88-558f-48ac-be94-40d911643943\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit-dir\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015776 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-config-volume\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015799 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-csi-data-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015878 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015933 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6df7b129-e660-446b-9d4d-6f505f6071ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.015963 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016005 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016049 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrg8f\" (UniqueName: \"kubernetes.io/projected/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-kube-api-access-nrg8f\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016132 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zvh\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016198 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9pd5\" (UniqueName: \"kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016229 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-apiservice-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016315 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/18883629-496c-42b8-8fab-68b3daa9fcad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: \"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016379 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-encryption-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016467 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-cabundle\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016533 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.016557 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-plugins-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.018433 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019190 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019335 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-registration-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019393 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019421 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019466 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p9gf\" (UID: 
\"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019525 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019566 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-client\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019590 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-serving-cert\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019650 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.019751 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.024306 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.024832 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-stats-auth\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.025050 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.026064 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-default-certificate\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.026760 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.028992 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.029253 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s"] Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.029535 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.031739 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3c2904-e0f2-436a-8172-6639cc9661a9-metrics-certs\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.033004 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3c2904-e0f2-436a-8172-6639cc9661a9-service-ca-bundle\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.033578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.036542 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.037679 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.037682 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.037833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8fba97f-964e-45a8-8265-2c21dcb75903-metrics-tls\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.037896 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8fba97f-964e-45a8-8265-2c21dcb75903-trusted-ca\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.038356 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.038486 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85aba95-efd2-4600-8451-bc92e280b4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.055889 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwjh5\" (UniqueName: \"kubernetes.io/projected/5b3c2904-e0f2-436a-8172-6639cc9661a9-kube-api-access-cwjh5\") pod \"router-default-5444994796-nl6sq\" (UID: \"5b3c2904-e0f2-436a-8172-6639cc9661a9\") " pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.078968 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t5zb\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-kube-api-access-7t5zb\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.099848 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8fba97f-964e-45a8-8265-2c21dcb75903-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pwxfp\" (UID: \"d8fba97f-964e-45a8-8265-2c21dcb75903\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120792 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9pd5\" (UniqueName: 
\"kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120827 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-apiservice-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120850 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/18883629-496c-42b8-8fab-68b3daa9fcad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: \"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120873 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-encryption-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120892 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-cabundle\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120908 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-plugins-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120933 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120951 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-registration-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120965 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120982 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-client\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.120996 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-serving-cert\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121012 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121031 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121048 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9z4g\" (UniqueName: \"kubernetes.io/projected/c5821305-00b6-46d4-8ba2-f9930b07a9c0-kube-api-access-s9z4g\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121065 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvwh\" (UniqueName: \"kubernetes.io/projected/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-kube-api-access-pdvwh\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121095 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121111 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhpsk\" (UniqueName: \"kubernetes.io/projected/6154c87e-cea0-46aa-b314-0c22fbaee635-kube-api-access-zhpsk\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121128 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121143 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rvqv\" (UniqueName: \"kubernetes.io/projected/6c414209-4eca-4639-afb5-80710e5077d1-kube-api-access-8rvqv\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-node-pullsecrets\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121171 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6154c87e-cea0-46aa-b314-0c22fbaee635-tmpfs\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121189 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8t85\" (UniqueName: \"kubernetes.io/projected/8e2973a3-194c-45c5-9679-1d06e941b31a-kube-api-access-f8t85\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121203 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121225 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121242 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-mountpoint-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121267 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kt6h\" (UniqueName: \"kubernetes.io/projected/18883629-496c-42b8-8fab-68b3daa9fcad-kube-api-access-8kt6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: 
\"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121282 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b998caf-1b12-48d4-b9f5-ac76e2920993-serving-cert\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121298 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-metrics-tls\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121331 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gkk\" (UniqueName: \"kubernetes.io/projected/283cce7a-b2b3-4fe9-902a-3c550d7a290d-kube-api-access-26gkk\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121346 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqc8\" (UniqueName: \"kubernetes.io/projected/7b998caf-1b12-48d4-b9f5-ac76e2920993-kube-api-access-zlqc8\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121361 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121375 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-webhook-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121391 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9spf5\" (UniqueName: \"kubernetes.io/projected/a7b01586-c627-4be0-aa6f-c0942b4d14de-kube-api-access-9spf5\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121408 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8nm7\" (UniqueName: \"kubernetes.io/projected/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-kube-api-access-m8nm7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121423 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-images\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121447 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjd4\" (UniqueName: \"kubernetes.io/projected/6df7b129-e660-446b-9d4d-6f505f6071ad-kube-api-access-zdjd4\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121460 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-image-import-ca\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121476 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121491 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c414209-4eca-4639-afb5-80710e5077d1-proxy-tls\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121518 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-node-bootstrap-token\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121541 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-socket-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121565 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b998caf-1b12-48d4-b9f5-ac76e2920993-config\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121584 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-certs\") pod 
\"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121598 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-key\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121614 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnngx\" (UniqueName: \"kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121629 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/283cce7a-b2b3-4fe9-902a-3c550d7a290d-cert\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121652 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121666 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7rnv\" (UniqueName: \"kubernetes.io/projected/5c32ef88-558f-48ac-be94-40d911643943-kube-api-access-w7rnv\") pod \"migrator-59844c95c7-fkcx9\" (UID: \"5c32ef88-558f-48ac-be94-40d911643943\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121680 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit-dir\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121695 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-config-volume\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121710 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-csi-data-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121726 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121743 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6df7b129-e660-446b-9d4d-6f505f6071ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.121759 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrg8f\" (UniqueName: \"kubernetes.io/projected/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-kube-api-access-nrg8f\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.126251 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-metrics-tls\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.129699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.130040 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.630024309 +0000 UTC m=+161.697145572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.130395 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-mountpoint-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.131059 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6154c87e-cea0-46aa-b314-0c22fbaee635-tmpfs\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.131184 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-node-pullsecrets\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.131207 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.131487 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b998caf-1b12-48d4-b9f5-ac76e2920993-config\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.132197 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-plugins-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.132766 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-image-import-ca\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.133044 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-cabundle\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" 
Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.133034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.133544 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.134607 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-registration-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.134856 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit-dir\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.135406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-socket-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.136258 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c414209-4eca-4639-afb5-80710e5077d1-images\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.136878 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-apiservice-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.137223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a7b01586-c627-4be0-aa6f-c0942b4d14de-csi-data-dir\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.137341 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: 
\"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.137804 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-audit\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.137850 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-config-volume\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.138448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.145777 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-etcd-client\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.145916 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b85aba95-efd2-4600-8451-bc92e280b4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd7hx\" (UID: \"b85aba95-efd2-4600-8451-bc92e280b4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.149255 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-serving-cert\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.150120 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/18883629-496c-42b8-8fab-68b3daa9fcad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: \"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.153487 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/283cce7a-b2b3-4fe9-902a-3c550d7a290d-cert\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.153534 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b998caf-1b12-48d4-b9f5-ac76e2920993-serving-cert\") pod 
\"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.154208 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.154572 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.154848 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c414209-4eca-4639-afb5-80710e5077d1-proxy-tls\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.155030 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-certs\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.156155 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6154c87e-cea0-46aa-b314-0c22fbaee635-webhook-cert\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.155230 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5821305-00b6-46d4-8ba2-f9930b07a9c0-node-bootstrap-token\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.155651 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6df7b129-e660-446b-9d4d-6f505f6071ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.156006 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" 
Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.156111 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.155138 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-encryption-config\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.159239 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8e2973a3-194c-45c5-9679-1d06e941b31a-signing-key\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.168133 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zvh\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.174642 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97\") pod \"console-f9d7485db-gw87z\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.189001 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.200929 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.222781 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.223352 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.723319962 +0000 UTC m=+161.790441215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.229293 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.241878 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.246776 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrg8f\" (UniqueName: \"kubernetes.io/projected/e8a25b05-abb6-4909-9e7c-7d9f3d3ead56-kube-api-access-nrg8f\") pod \"dns-default-m4pm9\" (UID: \"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56\") " pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.259172 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.279243 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9pd5\" (UniqueName: \"kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5\") pod \"marketplace-operator-79b997595-9l2jg\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.288147 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gkk\" (UniqueName: \"kubernetes.io/projected/283cce7a-b2b3-4fe9-902a-3c550d7a290d-kube-api-access-26gkk\") pod \"ingress-canary-4g6cc\" (UID: \"283cce7a-b2b3-4fe9-902a-3c550d7a290d\") " pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.292573 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqc8\" (UniqueName: \"kubernetes.io/projected/7b998caf-1b12-48d4-b9f5-ac76e2920993-kube-api-access-zlqc8\") pod \"service-ca-operator-777779d784-s2v7x\" (UID: \"7b998caf-1b12-48d4-b9f5-ac76e2920993\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.311612 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvwh\" (UniqueName: \"kubernetes.io/projected/b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f-kube-api-access-pdvwh\") pod \"apiserver-76f77b778f-7p9gf\" (UID: \"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f\") " pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.327508 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.327884 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.827868675 +0000 UTC m=+161.894989938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.351523 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.352920 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8t85\" (UniqueName: \"kubernetes.io/projected/8e2973a3-194c-45c5-9679-1d06e941b31a-kube-api-access-f8t85\") pod \"service-ca-9c57cc56f-h5cd4\" (UID: \"8e2973a3-194c-45c5-9679-1d06e941b31a\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.356720 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjd4\" (UniqueName: \"kubernetes.io/projected/6df7b129-e660-446b-9d4d-6f505f6071ad-kube-api-access-zdjd4\") pod \"package-server-manager-789f6589d5-nhsnw\" (UID: \"6df7b129-e660-446b-9d4d-6f505f6071ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.374490 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.376146 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.380332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kt6h\" (UniqueName: \"kubernetes.io/projected/18883629-496c-42b8-8fab-68b3daa9fcad-kube-api-access-8kt6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-cgdkp\" (UID: \"18883629-496c-42b8-8fab-68b3daa9fcad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.409199 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rvqv\" (UniqueName: \"kubernetes.io/projected/6c414209-4eca-4639-afb5-80710e5077d1-kube-api-access-8rvqv\") pod \"machine-config-operator-74547568cd-l26wg\" (UID: \"6c414209-4eca-4639-afb5-80710e5077d1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.416998 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhpsk\" (UniqueName: \"kubernetes.io/projected/6154c87e-cea0-46aa-b314-0c22fbaee635-kube-api-access-zhpsk\") pod \"packageserver-d55dfcdfc-7k5m2\" (UID: \"6154c87e-cea0-46aa-b314-0c22fbaee635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.417522 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4pm9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.426312 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.428833 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.429284 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:56.92926219 +0000 UTC m=+161.996383453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.432697 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4g6cc" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.447014 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnngx\" (UniqueName: \"kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx\") pod \"collect-profiles-29491605-s9fpg\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.457609 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.459833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spf5\" (UniqueName: \"kubernetes.io/projected/a7b01586-c627-4be0-aa6f-c0942b4d14de-kube-api-access-9spf5\") pod \"csi-hostpathplugin-znw8f\" (UID: \"a7b01586-c627-4be0-aa6f-c0942b4d14de\") " pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.488191 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" event={"ID":"6c9819c6-6d83-4ef1-94bd-038e573864d9","Type":"ContainerStarted","Data":"addbe644ad1771df0a789973d2c3f612a74b635f0edeace76682f13127ca9e99"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.490418 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8nm7\" (UniqueName: \"kubernetes.io/projected/b5a04f8d-f2f5-4725-9e9b-8b4152fc67af-kube-api-access-m8nm7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s4c7\" (UID: \"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.503729 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9z4g\" (UniqueName: \"kubernetes.io/projected/c5821305-00b6-46d4-8ba2-f9930b07a9c0-kube-api-access-s9z4g\") pod \"machine-config-server-bhh2l\" (UID: \"c5821305-00b6-46d4-8ba2-f9930b07a9c0\") " pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.525422 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" event={"ID":"4810b36b-9b85-4f57-a5f0-3943e80c8386","Type":"ContainerStarted","Data":"29543a5d50e462df3322be754c0b1f7e28e964205da202eade3b36bd7963f67a"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.532108 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.532482 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.032462451 +0000 UTC m=+162.099583714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.545972 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7rnv\" (UniqueName: \"kubernetes.io/projected/5c32ef88-558f-48ac-be94-40d911643943-kube-api-access-w7rnv\") pod \"migrator-59844c95c7-fkcx9\" (UID: \"5c32ef88-558f-48ac-be94-40d911643943\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.552581 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" event={"ID":"c4a19963-7dfa-4c9b-b840-5fe912fcea71","Type":"ContainerStarted","Data":"71b0808f13661a9ca87c05114c02ea96eefa88c337f2074bebbc784b8dde1614"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.584681 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" event={"ID":"859f2a44-973c-4a21-97bd-6ba4ffd8b68a","Type":"ContainerStarted","Data":"a1c7fc325249e9f0dc746198d716a2f1f874dcd6f98a67c844fcf0e4b211b884"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.603364 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.619582 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.620034 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.625666 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.630878 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" event={"ID":"cc0d7d79-ad0c-4a31-861d-45a209142c0e","Type":"ContainerStarted","Data":"9fd621ac633a4a0fe3329fa09e20d5b1b635f85c5f720f8754624cde42027453"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.634566 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.635103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.636313 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.136291642 +0000 UTC m=+162.203412915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.643741 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.663869 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" event={"ID":"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4","Type":"ContainerStarted","Data":"6713b85fb51cb559a9da11c0ad0d5b8639e2734f7a2d56f78847b6383e77039f"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.663910 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" event={"ID":"b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4","Type":"ContainerStarted","Data":"0b0d4a7c396ee1d34eff8eafe596b34655c2dab8fa1bf4c48496acbe0a524ff9"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.688676 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bhh2l" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.711883 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.722410 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" event={"ID":"2bae531e-3aec-4cee-b651-5d04190e91d5","Type":"ContainerStarted","Data":"2f18aa8932a7539bcd32744b34b85c3980127a8da8caeb953531d4ec7e6fcd04"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.722459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" event={"ID":"2bae531e-3aec-4cee-b651-5d04190e91d5","Type":"ContainerStarted","Data":"0f9b0dcc3d0d335cbf49f965b9c23bf89dc802dbdd7ee2327d11eec7637d241f"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.740698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.742704 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.242691752 +0000 UTC m=+162.309813015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.746660 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" event={"ID":"c64cf35a-6747-43f9-bde5-c8060518bcda","Type":"ContainerStarted","Data":"433b982c1de85739731644e53044e3d05f315465fbcdf925d0fe17a84b79ac78"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.747464 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.765496 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vzdl2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.765591 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.780382 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" event={"ID":"a90ba36b-29b6-4380-bc08-c2c385bebb76","Type":"ContainerStarted","Data":"394b82a3d012a005e44aafed03a4c4ab133f9a6fd109009bd0d2b873d4c02b01"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.815818 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" event={"ID":"5dc71aa3-1214-4e05-b377-c47d7af89214","Type":"ContainerStarted","Data":"7e9f311de8e81d085fe05079bc4cf40d8b0b1f80450c4ef4b0709445c708d3c1"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.833803 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nl6sq" event={"ID":"5b3c2904-e0f2-436a-8172-6639cc9661a9","Type":"ContainerStarted","Data":"75ca595014d85da36d6ff8bf5f43ad84a9b31879e8a27d9491bde2fde754c1d6"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.845092 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.846533 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.346512582 +0000 UTC m=+162.413633845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.855576 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" event={"ID":"491db327-a9d5-420d-bef2-34193c435226","Type":"ContainerStarted","Data":"d6948453c916e11be5152ed6ac44715fd67103619a4066eb2defcb7f69383525"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.856718 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.874356 4729 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6fq6s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.874408 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" podUID="491db327-a9d5-420d-bef2-34193c435226" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.921178 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" event={"ID":"de054efe-cae5-4667-b75b-9b134cef5386","Type":"ContainerStarted","Data":"d12cb818e8e50f6d1c7a0db79ab128868a91f6d7aa311ecf0eea2608c338abbd"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.921224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" event={"ID":"de054efe-cae5-4667-b75b-9b134cef5386","Type":"ContainerStarted","Data":"c003138c9aad5f9b3ea7d5546c8a0a081ff778469d50357ff277352f7ad0b1b2"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.925802 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.936248 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx"] Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.940831 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" event={"ID":"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e","Type":"ContainerStarted","Data":"e7ba5260dc9b8904faa4842c9e00f37d35e5102ce59ec4db0f30ef8c1955d1bd"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.940879 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" event={"ID":"6c63f5a4-a5c6-442a-ac8f-80aaf6f2ca2e","Type":"ContainerStarted","Data":"5bb38aea4714ebba89829879271b4fb30886cb7e1a043e742f817183100dced2"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.947050 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:56 crc kubenswrapper[4729]: E0127 06:49:56.949371 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.449350562 +0000 UTC m=+162.516471815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.978918 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" event={"ID":"f2e9569f-7ac6-45ed-bc0f-8d989c015d98","Type":"ContainerStarted","Data":"282c3547b2365f64416d63c9912c606088c092c0daa026a546b0101d6111c209"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.979269 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" event={"ID":"f2e9569f-7ac6-45ed-bc0f-8d989c015d98","Type":"ContainerStarted","Data":"7ce0c02752dbbe32d072ce8547c7db73ee03e3b1320c4651f20c8382bcce89c0"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.983287 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jlbz8" event={"ID":"849b4067-5e9e-4864-912a-d5a7aa747232","Type":"ContainerStarted","Data":"141c364e45425429479b987c58c50e332dbdb109a9c82db4dc2aec995ab31300"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.983334 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jlbz8" event={"ID":"849b4067-5e9e-4864-912a-d5a7aa747232","Type":"ContainerStarted","Data":"b5b0729c7af282f59a56ad056faf8e2847f41ea35bc20de500931c4a42979a29"} Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.983786 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.986020 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.986086 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:49:56 crc kubenswrapper[4729]: I0127 06:49:56.988429 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" event={"ID":"16e432e9-5557-4178-9589-2fdb42148c92","Type":"ContainerStarted","Data":"9ab5ee7a127f766c6efd911d99e2487a4aadfad7ae4d6e9d9b2cfe7becbedefd"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.026692 4729 generic.go:334] "Generic (PLEG): container finished" podID="91329e26-4dca-44c0-b703-f555d141e214" containerID="6d2e54bb169b8ee350a953fd6c5c7c55be189840b3501216a5d7d30590fa398a" exitCode=0 Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.026835 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" 
event={"ID":"91329e26-4dca-44c0-b703-f555d141e214","Type":"ContainerDied","Data":"6d2e54bb169b8ee350a953fd6c5c7c55be189840b3501216a5d7d30590fa398a"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.035487 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" event={"ID":"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e","Type":"ContainerStarted","Data":"21ca3d047e25a577fe0d0347a6a7314d1fc8ee5e5328a9da2a9a4d9563989864"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.035803 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.036709 4729 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7rp9c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.036767 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" podUID="a5520083-62f9-4b9f-bdc1-ca238d9a4f9e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.045862 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" event={"ID":"8cb9535b-f866-4db2-8d3c-a0d9c8475684","Type":"ContainerStarted","Data":"1fcada83b8fa6c70d0b451619214190ce3a5fc1d36908b919e98a0c83df54b1e"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.052231 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q79v4" event={"ID":"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b","Type":"ContainerStarted","Data":"2db7bc72185e266345ec10332987c53b8c9b177d60d796ecb57077098883564c"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.052283 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q79v4" event={"ID":"5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b","Type":"ContainerStarted","Data":"d9eb280ba7eb6d6ba04d1e2d37fdbb3741f4731d7c2328613ac156a17730a740"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.053494 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.057331 4729 patch_prober.go:28] interesting pod/console-operator-58897d9998-q79v4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.057393 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-q79v4" podUID="5b2706a7-fcc4-49e6-a37d-6fd3ac63f18b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.057818 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.059086 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.559050245 +0000 UTC m=+162.626171498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.077496 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" event={"ID":"b1fbe809-ef9b-45f9-bbe6-937813005a23","Type":"ContainerStarted","Data":"97f90e787f6fe00a64f4026b3bee04aebbb6ce8da1c0e2158897e3b15fb57eb7"} Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.078284 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lsthv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.078319 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.166011 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.169870 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.669849192 +0000 UTC m=+162.736970455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.266470 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x"] Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.288995 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.289217 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.789198275 +0000 UTC m=+162.856319538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.289443 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.290382 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.790365592 +0000 UTC m=+162.857486855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.391143 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.393177 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:57.892734596 +0000 UTC m=+162.959855859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.459370 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4pm9"] Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.499602 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.500118 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.000104597 +0000 UTC m=+163.067225860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.519655 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7p9gf"] Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.597051 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-q79v4" podStartSLOduration=138.597028863 podStartE2EDuration="2m18.597028863s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.568529007 +0000 UTC m=+162.635650290" watchObservedRunningTime="2026-01-27 06:49:57.597028863 +0000 UTC m=+162.664150116" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.609005 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.614346 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.11430209 +0000 UTC m=+163.181423343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.620368 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.620885 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.120871605 +0000 UTC m=+163.187992868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.630904 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" podStartSLOduration=138.630877576 podStartE2EDuration="2m18.630877576s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.620555165 +0000 UTC m=+162.687676428" watchObservedRunningTime="2026-01-27 06:49:57.630877576 +0000 UTC m=+162.697998839" Jan 27 06:49:57 crc kubenswrapper[4729]: W0127 06:49:57.640274 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8a25b05_abb6_4909_9e7c_7d9f3d3ead56.slice/crio-0f696d2e23db6227f0edcbf8fa890cda57a997e856591802ad1d746dbcd877c9 WatchSource:0}: Error finding container 0f696d2e23db6227f0edcbf8fa890cda57a997e856591802ad1d746dbcd877c9: Status 404 returned error can't find the container with id 0f696d2e23db6227f0edcbf8fa890cda57a997e856591802ad1d746dbcd877c9 Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.717062 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs5fn" podStartSLOduration=138.717031007 podStartE2EDuration="2m18.717031007s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.675337519 +0000 UTC m=+162.742458782" watchObservedRunningTime="2026-01-27 06:49:57.717031007 +0000 UTC m=+162.784152270" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.730189 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp"] Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.730758 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.731501 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.231480016 +0000 UTC m=+163.298601279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: W0127 06:49:57.744891 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb80055cd_9c20_4bd9_8cd3_283fdf3bcc6f.slice/crio-86de283b8a09163215a4eb6613b4b45b9038e0fe7d5367fd0c9226e07d901694 WatchSource:0}: Error finding container 86de283b8a09163215a4eb6613b4b45b9038e0fe7d5367fd0c9226e07d901694: Status 404 returned error can't find the container with id 86de283b8a09163215a4eb6613b4b45b9038e0fe7d5367fd0c9226e07d901694 Jan 27 06:49:57 crc kubenswrapper[4729]: W0127 06:49:57.844282 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8fba97f_964e_45a8_8265_2c21dcb75903.slice/crio-c28813e5cf61df1b476eb938bd903276810bd07572a0bddce1e21a11a3fde695 WatchSource:0}: Error finding container c28813e5cf61df1b476eb938bd903276810bd07572a0bddce1e21a11a3fde695: Status 404 returned error can't find the container with id c28813e5cf61df1b476eb938bd903276810bd07572a0bddce1e21a11a3fde695 Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.871850 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.872575 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.372559406 +0000 UTC m=+163.439680669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.921988 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.922208 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jlbz8" podStartSLOduration=138.92219593 podStartE2EDuration="2m18.92219593s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.801408372 +0000 UTC m=+162.868529635" watchObservedRunningTime="2026-01-27 06:49:57.92219593 +0000 UTC m=+162.989317193" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.943893 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" podStartSLOduration=138.943875575 podStartE2EDuration="2m18.943875575s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.846800674 +0000 UTC m=+162.913921937" watchObservedRunningTime="2026-01-27 06:49:57.943875575 +0000 UTC m=+163.010996838" Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.975888 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:57 crc kubenswrapper[4729]: E0127 06:49:57.976407 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.476384046 +0000 UTC m=+163.543505309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:57 crc kubenswrapper[4729]: I0127 06:49:57.993138 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.019251 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4g6cc"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.021824 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-th46k" podStartSLOduration=139.02180794 podStartE2EDuration="2m19.02180794s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:57.970647978 +0000 UTC m=+163.037769241" watchObservedRunningTime="2026-01-27 06:49:58.02180794 +0000 UTC m=+163.088929203" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.024878 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5cd4"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.047895 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" podStartSLOduration=139.04787733 podStartE2EDuration="2m19.04787733s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.029296602 +0000 UTC m=+163.096417875" watchObservedRunningTime="2026-01-27 06:49:58.04787733 +0000 UTC m=+163.114998593" Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.088507 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.588489274 +0000 UTC m=+163.655610537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.092404 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.110907 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.120476 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8jfj6" podStartSLOduration=139.120453249 podStartE2EDuration="2m19.120453249s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.114672809 +0000 UTC m=+163.181794072" watchObservedRunningTime="2026-01-27 06:49:58.120453249 +0000 UTC m=+163.187574502" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.198412 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bhh2l" event={"ID":"c5821305-00b6-46d4-8ba2-f9930b07a9c0","Type":"ContainerStarted","Data":"7a2275f475841a6db37a6e1bd7804c89d20647581eaf21eca0d4ed057ed0f2be"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.198857 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.199310 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.699293191 +0000 UTC m=+163.766414454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.201784 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d47tg" podStartSLOduration=139.201758828 podStartE2EDuration="2m19.201758828s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.197892158 +0000 UTC m=+163.265013411" watchObservedRunningTime="2026-01-27 06:49:58.201758828 +0000 UTC m=+163.268880091" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.263305 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.280906 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podStartSLOduration=139.28088991 podStartE2EDuration="2m19.28088991s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.276210395 +0000 UTC m=+163.343331658" watchObservedRunningTime="2026-01-27 06:49:58.28088991 +0000 UTC m=+163.348011173" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.301933 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.302373 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.802359739 +0000 UTC m=+163.869481002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: W0127 06:49:58.312098 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e2973a3_194c_45c5_9679_1d06e941b31a.slice/crio-11700617548f50d7d38310e3f762d0b2ae2d95928374d55f38c6793fd32b1356 WatchSource:0}: Error finding container 11700617548f50d7d38310e3f762d0b2ae2d95928374d55f38c6793fd32b1356: Status 404 returned error can't find the container with id 11700617548f50d7d38310e3f762d0b2ae2d95928374d55f38c6793fd32b1356 Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.321815 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" event={"ID":"f2e9569f-7ac6-45ed-bc0f-8d989c015d98","Type":"ContainerStarted","Data":"27029bcf047925ff32270dd0a1386a852015ceece4f05976a8aa3dc958f20f48"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.367401 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" event={"ID":"7b998caf-1b12-48d4-b9f5-ac76e2920993","Type":"ContainerStarted","Data":"946028302babed67cc52a370b6d3e7f8ae961a142a23f855ee15062835a20821"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.404724 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.405673 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:58.905651632 +0000 UTC m=+163.972772895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.458316 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.458353 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" event={"ID":"a90ba36b-29b6-4380-bc08-c2c385bebb76","Type":"ContainerStarted","Data":"0b254426f42dd6a051ee3464428284b9fe8d4ee8e19ae2e5bef9a120db0a8b68"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.489217 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" event={"ID":"8cb9535b-f866-4db2-8d3c-a0d9c8475684","Type":"ContainerStarted","Data":"3909f7977175badd366f182ae8dbee7b3c49f4d90c0359136cccfc0696c0066d"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.509637 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.510057 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.01004288 +0000 UTC m=+164.077164143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.518193 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nl6sq" event={"ID":"5b3c2904-e0f2-436a-8172-6639cc9661a9","Type":"ContainerStarted","Data":"40547dfccad024b3439411edc6072c63abcaa14ce06bc49e47e889afb6b259fc"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.519785 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" event={"ID":"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f","Type":"ContainerStarted","Data":"86de283b8a09163215a4eb6613b4b45b9038e0fe7d5367fd0c9226e07d901694"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.520981 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" event={"ID":"b1fbe809-ef9b-45f9-bbe6-937813005a23","Type":"ContainerStarted","Data":"f5d1705f550364a352ded8193e9d8092f399b8721e3f511fd64398af3b8ab032"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.525895 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" event={"ID":"6c9819c6-6d83-4ef1-94bd-038e573864d9","Type":"ContainerStarted","Data":"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.526643 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.527700 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" event={"ID":"16e432e9-5557-4178-9589-2fdb42148c92","Type":"ContainerStarted","Data":"2856e913f8a3e174e7cf86c8ae8b26f73b1746e14858ce426268c1407cd4923f"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.528165 4729 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fb2x5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.528202 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 27 06:49:58 crc kubenswrapper[4729]: W0127 06:49:58.537361 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6154c87e_cea0_46aa_b314_0c22fbaee635.slice/crio-a0fcc95a21501bf98725025dbe1fbb94f9e3e5886056d56401a182a8bd863718 WatchSource:0}: Error finding container a0fcc95a21501bf98725025dbe1fbb94f9e3e5886056d56401a182a8bd863718: Status 404 returned error can't find the container with id 
a0fcc95a21501bf98725025dbe1fbb94f9e3e5886056d56401a182a8bd863718 Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.542676 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" event={"ID":"a5520083-62f9-4b9f-bdc1-ca238d9a4f9e","Type":"ContainerStarted","Data":"b5ee54838e3657fba3df0af46b5f2e6ef8744a4247535aa41982374cc5a72c4e"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.547826 4729 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7rp9c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.547889 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" podUID="a5520083-62f9-4b9f-bdc1-ca238d9a4f9e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.557120 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znw8f"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.581676 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.596384 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gw87z" event={"ID":"43a6c23c-78b7-4a13-b1a4-efab2dc70130","Type":"ContainerStarted","Data":"c64666a8e048bcb51dd05f577fbd5021891c15939075692049729d84b82767f8"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.621636 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.655262 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.155231867 +0000 UTC m=+164.222353130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: W0127 06:49:58.668995 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5a04f8d_f2f5_4725_9e9b_8b4152fc67af.slice/crio-588b7278bc35cf78a0b32e9d640471a0b765f574d7bdc141c873952e200f068e WatchSource:0}: Error finding container 588b7278bc35cf78a0b32e9d640471a0b765f574d7bdc141c873952e200f068e: Status 404 returned error can't find the container with id 588b7278bc35cf78a0b32e9d640471a0b765f574d7bdc141c873952e200f068e Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.686226 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.698455 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xcqlf" podStartSLOduration=139.698430441 podStartE2EDuration="2m19.698430441s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.621221049 +0000 UTC m=+163.688342312" watchObservedRunningTime="2026-01-27 06:49:58.698430441 +0000 UTC m=+163.765551704" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.708492 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9"] Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.740476 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" event={"ID":"491db327-a9d5-420d-bef2-34193c435226","Type":"ContainerStarted","Data":"33f8a7cb53a7410a85c56086efa118da862cdd312cb0080b1c1f8b68e538592c"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.756881 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.757417 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.257404456 +0000 UTC m=+164.324525719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.786392 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" event={"ID":"b85aba95-efd2-4600-8451-bc92e280b4da","Type":"ContainerStarted","Data":"214f150875af1392d92795b8e614932055708e1e0f10089a046567f0c9ef490e"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.797185 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" podStartSLOduration=139.797164904 podStartE2EDuration="2m19.797164904s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.796664078 +0000 UTC m=+163.863785361" watchObservedRunningTime="2026-01-27 06:49:58.797164904 +0000 UTC m=+163.864286167" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.802058 4729 generic.go:334] "Generic (PLEG): container finished" podID="2bae531e-3aec-4cee-b651-5d04190e91d5" containerID="2f18aa8932a7539bcd32744b34b85c3980127a8da8caeb953531d4ec7e6fcd04" exitCode=0 Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.802651 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" event={"ID":"2bae531e-3aec-4cee-b651-5d04190e91d5","Type":"ContainerDied","Data":"2f18aa8932a7539bcd32744b34b85c3980127a8da8caeb953531d4ec7e6fcd04"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.802681 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.820978 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4pm9" event={"ID":"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56","Type":"ContainerStarted","Data":"0f696d2e23db6227f0edcbf8fa890cda57a997e856591802ad1d746dbcd877c9"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.823084 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" event={"ID":"c4a19963-7dfa-4c9b-b840-5fe912fcea71","Type":"ContainerStarted","Data":"3099ac0ddb83801bbf4eeaf2b9e158f05b6b2445250eac08e95630aa4a2fc7a2"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.828750 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6fq6s" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.829274 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6ptg" event={"ID":"cc0d7d79-ad0c-4a31-861d-45a209142c0e","Type":"ContainerStarted","Data":"c8d39dd4b6e29ba012518f85bf5df0d5a0001adba39a17889181237b016de6f0"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.830792 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" event={"ID":"230217b5-781e-408c-816e-13bff539250b","Type":"ContainerStarted","Data":"cda3036c4f2d9c3d811d10e636ad168da78826ba872d9b82f7b76af99d5db5b0"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.831766 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" event={"ID":"d3ce67a6-c46d-4334-b408-48753b87ea93","Type":"ContainerStarted","Data":"ff20b02bea4c54cc9e6a354ad19045a0d02bd0a16376bd77e1c7ee0b45f8a223"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.857821 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.859117 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.35908789 +0000 UTC m=+164.426209153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.877504 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" event={"ID":"d8fba97f-964e-45a8-8265-2c21dcb75903","Type":"ContainerStarted","Data":"c28813e5cf61df1b476eb938bd903276810bd07572a0bddce1e21a11a3fde695"} Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.878758 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.878800 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.893805 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.903431 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6mcqw" podStartSLOduration=139.903414959 podStartE2EDuration="2m19.903414959s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 06:49:58.903219813 +0000 UTC m=+163.970341076" watchObservedRunningTime="2026-01-27 06:49:58.903414959 +0000 UTC m=+163.970536222" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.906004 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbn5f" podStartSLOduration=139.905992959 podStartE2EDuration="2m19.905992959s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.862995581 +0000 UTC m=+163.930116844" watchObservedRunningTime="2026-01-27 06:49:58.905992959 +0000 UTC m=+163.973114222" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.934249 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vhwcv" podStartSLOduration=139.934219728 podStartE2EDuration="2m19.934219728s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:58.927089715 +0000 UTC m=+163.994210978" watchObservedRunningTime="2026-01-27 06:49:58.934219728 +0000 UTC m=+164.001340991" Jan 27 06:49:58 crc kubenswrapper[4729]: I0127 06:49:58.959442 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:58 crc kubenswrapper[4729]: E0127 06:49:58.963931 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.463917921 +0000 UTC m=+164.531039184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.060954 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.062579 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.56255049 +0000 UTC m=+164.629671763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.067414 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" podStartSLOduration=140.067396821 podStartE2EDuration="2m20.067396821s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.061557159 +0000 UTC m=+164.128678422" watchObservedRunningTime="2026-01-27 06:49:59.067396821 +0000 UTC m=+164.134518084" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.154309 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" podStartSLOduration=140.154292274 podStartE2EDuration="2m20.154292274s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.153633714 +0000 UTC m=+164.220754987" watchObservedRunningTime="2026-01-27 06:49:59.154292274 +0000 UTC m=+164.221413537" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.163466 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.163811 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.663800781 +0000 UTC m=+164.730922044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.230570 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gw87z" podStartSLOduration=140.230549597 podStartE2EDuration="2m20.230549597s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.230163455 +0000 UTC m=+164.297284718" watchObservedRunningTime="2026-01-27 06:49:59.230549597 +0000 UTC m=+164.297670860" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.244660 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.245156 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.245227 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.273986 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.274444 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.774430842 +0000 UTC m=+164.841552105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.340457 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zrf48" podStartSLOduration=140.340440817 podStartE2EDuration="2m20.340440817s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.273609558 +0000 UTC m=+164.340730821" watchObservedRunningTime="2026-01-27 06:49:59.340440817 +0000 UTC m=+164.407562080" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.376890 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.377374 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.877360826 +0000 UTC m=+164.944482089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.405816 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" podStartSLOduration=140.4058003 podStartE2EDuration="2m20.4058003s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.339540908 +0000 UTC m=+164.406662181" watchObservedRunningTime="2026-01-27 06:49:59.4058003 +0000 UTC m=+164.472921563" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.457899 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nl6sq" podStartSLOduration=140.457878991 podStartE2EDuration="2m20.457878991s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.457595672 +0000 UTC m=+164.524716935" watchObservedRunningTime="2026-01-27 06:49:59.457878991 +0000 UTC m=+164.525000254" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.478264 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.479434 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:49:59.97941475 +0000 UTC m=+165.046536013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.548385 4729 csr.go:261] certificate signing request csr-8cg4m is approved, waiting to be issued Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.572159 4729 csr.go:257] certificate signing request csr-8cg4m is issued Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.605537 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.606692 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.10667284 +0000 UTC m=+165.173794103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.670465 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-q79v4" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.711813 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.712275 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.212258315 +0000 UTC m=+165.279379578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.813696 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.814102 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.314073113 +0000 UTC m=+165.381194366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.890464 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" event={"ID":"a7b01586-c627-4be0-aa6f-c0942b4d14de","Type":"ContainerStarted","Data":"887c4ae9baf5d15c076266fa22449370a0d94f909a729398dcb71c31122b5be0"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.913053 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" event={"ID":"16e432e9-5557-4178-9589-2fdb42148c92","Type":"ContainerStarted","Data":"bcc9de5d1ab8ed5ff0f1ea860ecca1ea704931328abd0df9c50f53d33ca03022"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.914378 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:49:59 crc kubenswrapper[4729]: E0127 06:49:59.914766 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.414751616 +0000 UTC m=+165.481872879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.952637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" event={"ID":"8cb9535b-f866-4db2-8d3c-a0d9c8475684","Type":"ContainerStarted","Data":"4817b4d0b19edb34426f1a688d06b8c3f15af3bbc3ebb202f5eb17f9481278ca"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.959343 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" event={"ID":"18883629-496c-42b8-8fab-68b3daa9fcad","Type":"ContainerStarted","Data":"e83c5ea0041d8bef8bbcbaddb418a2019e083819a4605b93146207929072426b"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.959379 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" event={"ID":"18883629-496c-42b8-8fab-68b3daa9fcad","Type":"ContainerStarted","Data":"a2efe5a2ea9ec4b0e0f62d1d4af8ac10709f90cfd7ffbadef33339a0f237f8a3"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.971085 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" event={"ID":"6df7b129-e660-446b-9d4d-6f505f6071ad","Type":"ContainerStarted","Data":"d64fa8f16dbf05b651d915ad96559d8483ebe79a46d7679906a0c3e6705b0d23"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.971133 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" event={"ID":"6df7b129-e660-446b-9d4d-6f505f6071ad","Type":"ContainerStarted","Data":"5adeca6faf6ff0e8760e18e00f5109780b35677500e6ed972d3970955e1d92a2"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.972373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bhh2l" event={"ID":"c5821305-00b6-46d4-8ba2-f9930b07a9c0","Type":"ContainerStarted","Data":"a77bbbc3ff277f667758271bf2cdd2b80d078394f088d5589f8fec2594c676e2"} Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.984649 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nx6kp" podStartSLOduration=140.984622489 podStartE2EDuration="2m20.984622489s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:59.983004479 +0000 UTC m=+165.050125742" watchObservedRunningTime="2026-01-27 06:49:59.984622489 +0000 UTC m=+165.051743752" Jan 27 06:49:59 crc kubenswrapper[4729]: I0127 06:49:59.990075 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" event={"ID":"b85aba95-efd2-4600-8451-bc92e280b4da","Type":"ContainerStarted","Data":"24031e7ac5795dc8995aed5405d8944820ae2c8b7a2477465ccafbee923f86e5"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.000061 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" event={"ID":"230217b5-781e-408c-816e-13bff539250b","Type":"ContainerStarted","Data":"09b1f5012d7f125afe48d4ef36509801dbc7c3f17526ee1edfeb8533c3e707f6"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.004273 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" event={"ID":"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af","Type":"ContainerStarted","Data":"588b7278bc35cf78a0b32e9d640471a0b765f574d7bdc141c873952e200f068e"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.005590 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swdwp" event={"ID":"c4a19963-7dfa-4c9b-b840-5fe912fcea71","Type":"ContainerStarted","Data":"c0c22eaa0286804bab3358524365fe2bf58c9fa3426e2c07f3c9d0dc4846c5f9"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.012271 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" event={"ID":"6c414209-4eca-4639-afb5-80710e5077d1","Type":"ContainerStarted","Data":"e09dfa72527ad03066ae318b1b563334a37e60833cda9b2809ad0b482db08cea"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.018824 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.019884 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.519871556 +0000 UTC m=+165.586992819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.021612 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" event={"ID":"8e2973a3-194c-45c5-9679-1d06e941b31a","Type":"ContainerStarted","Data":"11700617548f50d7d38310e3f762d0b2ae2d95928374d55f38c6793fd32b1356"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.035120 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" event={"ID":"2bae531e-3aec-4cee-b651-5d04190e91d5","Type":"ContainerStarted","Data":"dc4f7fc028828e3f1eda2f8e63f285e378f7a2aa3ab190cae3791b3f967c1128"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.046281 4729 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-d9qg8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.046340 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" podUID="2bae531e-3aec-4cee-b651-5d04190e91d5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.047693 4729 generic.go:334] "Generic (PLEG): container finished" podID="b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f" containerID="41a99f58eb3740f65c825dff9e9bdd14bb42aa92e1176fe14112b3b60961bf1f" exitCode=0 Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.047756 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" event={"ID":"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f","Type":"ContainerDied","Data":"41a99f58eb3740f65c825dff9e9bdd14bb42aa92e1176fe14112b3b60961bf1f"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.065351 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" event={"ID":"7b998caf-1b12-48d4-b9f5-ac76e2920993","Type":"ContainerStarted","Data":"ba47b6d6cfc365b1b4f359a3f82e38f029a0740f1486bff5fd66c26ee361cbf5"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.073332 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gw87z" event={"ID":"43a6c23c-78b7-4a13-b1a4-efab2dc70130","Type":"ContainerStarted","Data":"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.098123 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" event={"ID":"5c32ef88-558f-48ac-be94-40d911643943","Type":"ContainerStarted","Data":"c72cd52b2f26b3769091a27b9bbccbf646e63c69474dcbfc26e91ec4205961f3"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.120157 4729 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.121276 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.62125442 +0000 UTC m=+165.688375683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.125095 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" event={"ID":"6154c87e-cea0-46aa-b314-0c22fbaee635","Type":"ContainerStarted","Data":"a0fcc95a21501bf98725025dbe1fbb94f9e3e5886056d56401a182a8bd863718"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.126303 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bhh2l" podStartSLOduration=7.126291277 podStartE2EDuration="7.126291277s" podCreationTimestamp="2026-01-27 06:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.121824888 +0000 UTC m=+165.188946161" watchObservedRunningTime="2026-01-27 06:50:00.126291277 +0000 UTC m=+165.193412530" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.140417 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4g6cc" event={"ID":"283cce7a-b2b3-4fe9-902a-3c550d7a290d","Type":"ContainerStarted","Data":"2f1d524228e23f8a26cde439fd063891bef208a3e4b6ddf8c46967487fa21085"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.160482 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" event={"ID":"91329e26-4dca-44c0-b703-f555d141e214","Type":"ContainerStarted","Data":"524f5445667a37712aec1d165f6717ec1fbe59a8b069f47ec5c5f11499780719"} Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.195891 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7rp9c" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.234218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.234659 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.734647448 +0000 UTC m=+165.801768711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.260790 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:00 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:00 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:00 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.260861 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.279422 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cgdkp" podStartSLOduration=141.2794033 podStartE2EDuration="2m21.2794033s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.272903889 +0000 UTC m=+165.340025152" watchObservedRunningTime="2026-01-27 06:50:00.2794033 +0000 UTC m=+165.346524563" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.336151 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.336591 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.836572029 +0000 UTC m=+165.903693292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.355262 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-27hdf" podStartSLOduration=141.3552393 podStartE2EDuration="2m21.3552393s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.316594508 +0000 UTC m=+165.383715781" watchObservedRunningTime="2026-01-27 06:50:00.3552393 +0000 UTC m=+165.422360573" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.422042 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd7hx" podStartSLOduration=141.422026738 podStartE2EDuration="2m21.422026738s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.381678333 +0000 UTC m=+165.448799596" watchObservedRunningTime="2026-01-27 06:50:00.422026738 +0000 UTC m=+165.489148001" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.438695 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.439204 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:00.939192082 +0000 UTC m=+166.006313345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.523649 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s2v7x" podStartSLOduration=141.523630519 podStartE2EDuration="2m21.523630519s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.522648788 +0000 UTC m=+165.589770051" watchObservedRunningTime="2026-01-27 06:50:00.523630519 +0000 UTC m=+165.590751782" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.524577 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" podStartSLOduration=141.524572389 podStartE2EDuration="2m21.524572389s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.503350528 +0000 UTC m=+165.570471791" watchObservedRunningTime="2026-01-27 06:50:00.524572389 +0000 UTC m=+165.591693652" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.540265 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.540812 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.040793984 +0000 UTC m=+166.107915247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.578419 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 06:44:59 +0000 UTC, rotation deadline is 2026-11-12 20:25:59.707288997 +0000 UTC Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.578467 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6949h35m59.128825411s for next certificate rotation Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.642709 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.643048 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.143034174 +0000 UTC m=+166.210155437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.744073 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.744301 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.244278874 +0000 UTC m=+166.311400137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.744369 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.744873 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.244859503 +0000 UTC m=+166.311980766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.752568 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" podStartSLOduration=141.752551012 podStartE2EDuration="2m21.752551012s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:00.628383279 +0000 UTC m=+165.695504552" watchObservedRunningTime="2026-01-27 06:50:00.752551012 +0000 UTC m=+165.819672275" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.849610 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.850201 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.350181569 +0000 UTC m=+166.417302832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.887850 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.945818 4729 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-d9qg8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.945836 4729 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-d9qg8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.945876 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" podUID="2bae531e-3aec-4cee-b651-5d04190e91d5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.945892 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" podUID="2bae531e-3aec-4cee-b651-5d04190e91d5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 27 06:50:00 crc kubenswrapper[4729]: I0127 06:50:00.951722 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:00 crc kubenswrapper[4729]: E0127 06:50:00.952060 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.452048249 +0000 UTC m=+166.519169512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.052721 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.053093 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.553061872 +0000 UTC m=+166.620183135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.090512 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.090572 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.154168 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.154484 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.654472147 +0000 UTC m=+166.721593410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.177632 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" event={"ID":"6c414209-4eca-4639-afb5-80710e5077d1","Type":"ContainerStarted","Data":"6f6510cc39056d85513ca4ed40e8b4092d9fae187304b440b0dfccb51d5ac49c"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.177674 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" event={"ID":"6c414209-4eca-4639-afb5-80710e5077d1","Type":"ContainerStarted","Data":"3eb4b3555a64cfde26cb554a64f3f2658c6d35e5e63c0f30589983bdb3932cc6"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.181900 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" event={"ID":"d8fba97f-964e-45a8-8265-2c21dcb75903","Type":"ContainerStarted","Data":"6ec3129723a708a2cd2da979cdf4563c9cd7d0ee009650492abc2f029fd88d62"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.181940 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" event={"ID":"d8fba97f-964e-45a8-8265-2c21dcb75903","Type":"ContainerStarted","Data":"582ce2a24597f25c55c16f1651f2c6870ddbd1e389d0901689e54218158b4e46"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.204405 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" event={"ID":"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f","Type":"ContainerStarted","Data":"ee745875a11a8aa40b72ee005fe6fa70c5fc5d3148462c197687c17f2ee929aa"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.231524 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.234428 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.238122 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" event={"ID":"6df7b129-e660-446b-9d4d-6f505f6071ad","Type":"ContainerStarted","Data":"59536aae0b6ac93920ac2a054557756cc80c5d11f7bd23789c5c290081c7bec2"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.238184 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.251430 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" event={"ID":"5c32ef88-558f-48ac-be94-40d911643943","Type":"ContainerStarted","Data":"68778ab5797c1b967368f15ac9fcee81063fcfbff258e854313ad1733979a9c6"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.258658 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.259771 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.759754343 +0000 UTC m=+166.826875606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.268328 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:01 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:01 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:01 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.268400 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.277145 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.284571 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" event={"ID":"b5a04f8d-f2f5-4725-9e9b-8b4152fc67af","Type":"ContainerStarted","Data":"5da922eed9b91cd5386daf86c96e44b203711b776fac799cb7777c79a97313ce"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.291946 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4pm9" event={"ID":"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56","Type":"ContainerStarted","Data":"bd82ba6674474cb2a64e644ee7190dc0434b3745775175719a3f0ff53981d00d"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.291986 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4pm9" event={"ID":"e8a25b05-abb6-4909-9e7c-7d9f3d3ead56","Type":"ContainerStarted","Data":"7de58678d453c1d3c02cd7414ef57fb683a875f8e45c304e8c12c8c0cc9f7593"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.292567 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m4pm9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.299370 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l26wg" podStartSLOduration=142.299353725 podStartE2EDuration="2m22.299353725s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:01.297780506 +0000 UTC m=+166.364901779" watchObservedRunningTime="2026-01-27 06:50:01.299353725 +0000 UTC m=+166.366474988" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.310369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" 
event={"ID":"d3ce67a6-c46d-4334-b408-48753b87ea93","Type":"ContainerStarted","Data":"fa9eaaa3074cc501cd2c8dceeb03bb19b938d45ef3cf9676d2d0f10acb9bdc79"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.310952 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.314174 4729 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9l2jg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.314303 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.323668 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4g6cc" event={"ID":"283cce7a-b2b3-4fe9-902a-3c550d7a290d","Type":"ContainerStarted","Data":"7f8b6f8e06636acd5f127b44215e6be16be8bd399f4979757f7c50c74fecee8a"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.335813 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.336883 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.337788 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" event={"ID":"6154c87e-cea0-46aa-b314-0c22fbaee635","Type":"ContainerStarted","Data":"39582734f76add820bb51b04378707d643215a1019de77c231846c6caaac2b2a"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.345812 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.344124 4729 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7k5m2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.345953 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" podUID="6154c87e-cea0-46aa-b314-0c22fbaee635" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.347597 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" event={"ID":"8e2973a3-194c-45c5-9679-1d06e941b31a","Type":"ContainerStarted","Data":"6eafda77275e91d1038bcec0a3eaa121adb809c0a6d8e5ae7e74b444e3da2426"} Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.360938 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.362329 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fck\" (UniqueName: \"kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.362443 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.362599 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.362833 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.862821779 +0000 UTC m=+166.929943042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.381148 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.389329 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.466588 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467008 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzddq\" (UniqueName: \"kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467378 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467417 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9fck\" (UniqueName: \"kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467438 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467793 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.467971 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content\") pod \"certified-operators-zxshj\" 
(UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.468573 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:01.96855408 +0000 UTC m=+167.035675343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.470212 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.474017 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.498657 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.572881 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.572947 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.573000 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzddq\" (UniqueName: \"kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.573050 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc 
kubenswrapper[4729]: I0127 06:50:01.573541 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.573683 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.574024 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.07400886 +0000 UTC m=+167.141130123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.594976 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.596394 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.604340 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s4c7" podStartSLOduration=142.604318723 podStartE2EDuration="2m22.604318723s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:01.593554088 +0000 UTC m=+166.660675361" watchObservedRunningTime="2026-01-27 06:50:01.604318723 +0000 UTC m=+166.671439986" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.605361 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9fck\" (UniqueName: \"kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck\") pod \"certified-operators-zxshj\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.676257 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.676542 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.676602 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.676623 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.676686 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bzm\" (UniqueName: \"kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.676865 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.17684363 +0000 UTC m=+167.243964893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.698663 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c156b30-d262-4fdc-a70b-eb1703422f01-metrics-certs\") pod \"network-metrics-daemon-xqs5z\" (UID: \"2c156b30-d262-4fdc-a70b-eb1703422f01\") " pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.724751 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.751925 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.758480 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.761791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzddq\" (UniqueName: \"kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq\") pod \"community-operators-978h9\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.779460 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.779526 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.779549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.779600 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bzm\" (UniqueName: \"kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.780422 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.780529 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.280512135 +0000 UTC m=+167.347633398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.780698 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.814631 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" podStartSLOduration=142.814614966 podStartE2EDuration="2m22.814614966s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:01.803928134 +0000 UTC m=+166.871049407" watchObservedRunningTime="2026-01-27 06:50:01.814614966 +0000 UTC m=+166.881736229" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.817262 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.864703 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bzm\" (UniqueName: \"kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm\") pod \"certified-operators-8w6kb\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.878065 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqs5z" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.880342 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.880508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.880590 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jh62\" (UniqueName: \"kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.880640 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.880726 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.380711842 +0000 UTC m=+167.447833105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.890743 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.908394 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m4pm9" podStartSLOduration=8.908375363 podStartE2EDuration="8.908375363s" podCreationTimestamp="2026-01-27 06:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:01.882603882 +0000 UTC m=+166.949725145" watchObservedRunningTime="2026-01-27 06:50:01.908375363 +0000 UTC m=+166.975496626" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.928774 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.973380 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.982625 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.982664 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.982689 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.982752 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jh62\" (UniqueName: \"kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.983429 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: I0127 06:50:01.983654 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:01 crc kubenswrapper[4729]: E0127 06:50:01.983902 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.483891053 +0000 UTC m=+167.551012306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.044747 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jh62\" (UniqueName: \"kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62\") pod \"community-operators-lbbks\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.053625 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pwxfp" podStartSLOduration=143.053604942 podStartE2EDuration="2m23.053604942s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:01.999677154 +0000 UTC m=+167.066798427" watchObservedRunningTime="2026-01-27 06:50:02.053604942 +0000 UTC m=+167.120726205" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.084260 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.084652 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.584637698 +0000 UTC m=+167.651758961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.087704 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4g6cc" podStartSLOduration=9.087681072 podStartE2EDuration="9.087681072s" podCreationTimestamp="2026-01-27 06:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.080482118 +0000 UTC m=+167.147603381" watchObservedRunningTime="2026-01-27 06:50:02.087681072 +0000 UTC m=+167.154802335" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.091972 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.187506 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.187902 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.68788945 +0000 UTC m=+167.755010713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.248917 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:02 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:02 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:02 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.248985 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.250973 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d9qg8" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.289208 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.289406 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.789378857 +0000 UTC m=+167.856500120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.289506 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.290095 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.79008615 +0000 UTC m=+167.857207413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.299682 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" podStartSLOduration=143.299665028 podStartE2EDuration="2m23.299665028s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.263545194 +0000 UTC m=+167.330666467" watchObservedRunningTime="2026-01-27 06:50:02.299665028 +0000 UTC m=+167.366786291" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.373872 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h5cd4" podStartSLOduration=143.373854836 podStartE2EDuration="2m23.373854836s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.303665322 +0000 UTC m=+167.370786585" watchObservedRunningTime="2026-01-27 06:50:02.373854836 +0000 UTC m=+167.440976099" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.380873 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" event={"ID":"5c32ef88-558f-48ac-be94-40d911643943","Type":"ContainerStarted","Data":"4bb03a660c8410b531889f32ba4561144fb2c9121c9ff00f48c0c7ff8904fc12"} Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.383400 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" 
event={"ID":"a7b01586-c627-4be0-aa6f-c0942b4d14de","Type":"ContainerStarted","Data":"a116e248cef75dd092e868e38a5246ef660d3fafe8bf86513c227f55ce38fe5e"} Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.387698 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" event={"ID":"b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f","Type":"ContainerStarted","Data":"371cbfddfd2344f191b5ba098b0a8e69bd2a36cda192b92a3b17ec17616ccf9b"} Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.388538 4729 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9l2jg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.388589 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.389177 4729 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7k5m2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.389227 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" podUID="6154c87e-cea0-46aa-b314-0c22fbaee635" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.390603 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.390724 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.89070196 +0000 UTC m=+167.957823223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.390986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.391396 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:02.891388882 +0000 UTC m=+167.958510145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.468714 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" podStartSLOduration=143.468692676 podStartE2EDuration="2m23.468692676s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.463992631 +0000 UTC m=+167.531113894" watchObservedRunningTime="2026-01-27 06:50:02.468692676 +0000 UTC m=+167.535813939" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.497724 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.510840 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.010819577 +0000 UTC m=+168.077940840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.604228 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fkcx9" podStartSLOduration=143.604205063 podStartE2EDuration="2m23.604205063s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.590709973 +0000 UTC m=+167.657831246" watchObservedRunningTime="2026-01-27 06:50:02.604205063 +0000 UTC m=+167.671326326" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.622939 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.623436 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.123423751 +0000 UTC m=+168.190545014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.692139 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" podStartSLOduration=143.692120129 podStartE2EDuration="2m23.692120129s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:02.672566219 +0000 UTC m=+167.739687482" watchObservedRunningTime="2026-01-27 06:50:02.692120129 +0000 UTC m=+167.759241382" Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.726363 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.726716 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.226700254 +0000 UTC m=+168.293821517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.828383 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.828791 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.328779631 +0000 UTC m=+168.395900894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:02 crc kubenswrapper[4729]: I0127 06:50:02.929918 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:02 crc kubenswrapper[4729]: E0127 06:50:02.930328 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.430312669 +0000 UTC m=+168.497433932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.031397 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.031754 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.531741345 +0000 UTC m=+168.598862608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.134765 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.135059 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.635042959 +0000 UTC m=+168.702164222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.237576 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.240046 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.250199 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.750169591 +0000 UTC m=+168.817290854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.258689 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.261094 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.314267 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:03 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:03 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:03 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.314325 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.334071 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.343777 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.343882 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.843864347 +0000 UTC m=+168.910985610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.344129 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.344432 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:03.844423244 +0000 UTC m=+168.911544507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.344461 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.344568 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhfg5\" (UniqueName: \"kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.344629 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.445959 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.446141 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.446216 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.446276 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhfg5\" (UniqueName: \"kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.446647 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-27 06:50:03.946611463 +0000 UTC m=+169.013732726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.446967 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.447182 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.450598 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" event={"ID":"a7b01586-c627-4be0-aa6f-c0942b4d14de","Type":"ContainerStarted","Data":"4e24d87296f0f7265ddf8e32076b5d47d43e03cd2a50228b2c43d51d028f8c3c"} Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.491452 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhfg5\" (UniqueName: \"kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5\") pod \"redhat-marketplace-krr4s\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.548502 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.549500 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.049487704 +0000 UTC m=+169.116608967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.597856 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.609405 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.623303 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.642695 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.653530 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.653707 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.153681186 +0000 UTC m=+169.220802449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.653734 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.654115 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.154101979 +0000 UTC m=+169.221223242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.718448 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.727224 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.740182 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.740339 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:50:03 crc kubenswrapper[4729]: W0127 06:50:03.753312 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf34f6802_9269_4d26_abee_6f480d374416.slice/crio-bd76cfbdd7d1fb31d8bebdf641a8c97655728dc20fc061a39f105607776b661d WatchSource:0}: Error finding container bd76cfbdd7d1fb31d8bebdf641a8c97655728dc20fc061a39f105607776b661d: Status 404 returned error can't find the container with id bd76cfbdd7d1fb31d8bebdf641a8c97655728dc20fc061a39f105607776b661d Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.754616 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.754997 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.755037 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.755120 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjns\" (UniqueName: \"kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.755248 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.255227815 +0000 UTC m=+169.322349078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.765715 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857420 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857461 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857488 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857510 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857563 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjjns\" (UniqueName: \"kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857594 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.857988 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.858202 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.858441 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.358428866 +0000 UTC m=+169.425550129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.904457 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.911618 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:50:03 crc kubenswrapper[4729]: W0127 06:50:03.935740 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff397c2_8ed9_4073_ae6c_8600c382f227.slice/crio-6e3b34869de6fd4e214ad90c36664b2952189b920ced85843108dbc4bd97132b WatchSource:0}: Error finding container 6e3b34869de6fd4e214ad90c36664b2952189b920ced85843108dbc4bd97132b: Status 404 returned error can't find the container with id 6e3b34869de6fd4e214ad90c36664b2952189b920ced85843108dbc4bd97132b Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.961690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.961861 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.961945 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: I0127 06:50:03.962042 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:03 crc kubenswrapper[4729]: E0127 06:50:03.962146 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.462130963 +0000 UTC m=+169.529252226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.020412 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqs5z"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.024658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjjns\" (UniqueName: \"kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns\") pod \"redhat-marketplace-wrxcn\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.036563 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.038519 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.064990 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.065327 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.565313813 +0000 UTC m=+169.632435076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.084508 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.171072 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.171734 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.671715634 +0000 UTC m=+169.738836887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.191357 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.247203 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:04 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:04 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:04 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.247260 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.272435 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.272806 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.772793899 +0000 UTC m=+169.839915162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: W0127 06:50:04.276935 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod633788b3_11e3_447b_91cf_52a9563c052a.slice/crio-f39589c9d76a9c32ddf0359af6f2b30f3980c606cd46f67336bdfb5b9ab6344b WatchSource:0}: Error finding container f39589c9d76a9c32ddf0359af6f2b30f3980c606cd46f67336bdfb5b9ab6344b: Status 404 returned error can't find the container with id f39589c9d76a9c32ddf0359af6f2b30f3980c606cd46f67336bdfb5b9ab6344b Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.296395 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.313153 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.340355 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.354705 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.373099 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.373238 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zvs\" (UniqueName: \"kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.373292 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.373363 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.373462 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:04.87344792 +0000 UTC m=+169.940569183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.413223 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.477815 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6zvs\" (UniqueName: \"kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.477881 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.477945 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.477963 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.478372 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.478834 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.479098 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 06:50:04.979072746 +0000 UTC m=+170.046194009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.499664 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerStarted","Data":"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.499708 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerStarted","Data":"6e3b34869de6fd4e214ad90c36664b2952189b920ced85843108dbc4bd97132b"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.510686 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerStarted","Data":"d4cbc3558c1c6e54d27ff134062bd1cda66333f37361c90b22a846aa459961be"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.535461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" event={"ID":"2c156b30-d262-4fdc-a70b-eb1703422f01","Type":"ContainerStarted","Data":"d6eaa6cf30aaf9e36ebc28e1f01867600191c40234ec9f43af61687e7b5108c3"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.550237 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerStarted","Data":"f39589c9d76a9c32ddf0359af6f2b30f3980c606cd46f67336bdfb5b9ab6344b"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.551008 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6zvs\" (UniqueName: \"kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs\") pod \"redhat-operators-gv9zq\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.586864 4729 generic.go:334] "Generic (PLEG): container finished" podID="f34f6802-9269-4d26-abee-6f480d374416" containerID="4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d" exitCode=0 Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.587207 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.588015 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerDied","Data":"4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d"} 
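The repeated UnmountVolume.TearDown and MountVolume.MountDevice failures for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above all report the same condition: the kubelet has not yet registered the kubevirt.io.hostpath-provisioner CSI driver, so each volume operation is re-queued with a fixed 500ms durationBeforeRetry until the driver's registration socket is picked up (see the plugin_watcher entry for kubevirt.io.hostpath-provisioner-reg.sock at 06:50:06.713980 later in this log). As a hedged illustration only, and not part of the captured log, the following minimal client-go sketch lists the CSI drivers a node reports as registered via its CSINode object; the node name "crc" is taken from the log hostname, and the kubeconfig path is an assumption.

// csinode_drivers.go -- illustrative sketch only, assuming client-go and a kubeconfig
// at ~/.kube/config. It prints the drivers the named node reports as registered,
// which is one way to check whether kubevirt.io.hostpath-provisioner has registered.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig at the default path; in-cluster config would differ.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Node name "crc" comes from the log above; adjust for other clusters.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println(d.Name)
	}
}

The CSINode object is only a proxy for the kubelet's in-memory CSI driver registry, which is what actually produces the "not found in the list of registered CSI drivers" error, but it is updated as part of the same registration flow, so an empty Drivers list here would correspond to the retries seen in these entries.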
Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.588046 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerStarted","Data":"bd76cfbdd7d1fb31d8bebdf641a8c97655728dc20fc061a39f105607776b661d"} Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.588137 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.088120529 +0000 UTC m=+170.155241792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.611432 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.611716 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.619312 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" event={"ID":"a7b01586-c627-4be0-aa6f-c0942b4d14de","Type":"ContainerStarted","Data":"ab1508dcb1373278eb326809e58207d5fa3d93b92fda073aab7b5a072c943580"} Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.688651 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.690108 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.190091672 +0000 UTC m=+170.257212935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.722525 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.723658 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.746929 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.789636 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.789859 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.789894 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wk55\" (UniqueName: \"kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.789960 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.790120 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.290104174 +0000 UTC m=+170.357225427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.878816 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.894342 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.894392 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.894439 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.894448 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.895117 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.895333 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.894469 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wk55\" (UniqueName: \"kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: E0127 06:50:04.895393 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.39538068 +0000 UTC m=+170.462501933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.895665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.900212 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.931300 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:50:04 crc kubenswrapper[4729]: I0127 06:50:04.983986 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wk55\" (UniqueName: \"kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55\") pod \"redhat-operators-vmbbk\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:04.997569 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:04.998501 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.498484137 +0000 UTC m=+170.565605400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: W0127 06:50:05.017406 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83578f10_10b1_4953_902d_cf066f164ffe.slice/crio-df0b77d99905d465197b14eac023f3cdee730bbb43f983ca141e1baab5aa312a WatchSource:0}: Error finding container df0b77d99905d465197b14eac023f3cdee730bbb43f983ca141e1baab5aa312a: Status 404 returned error can't find the container with id df0b77d99905d465197b14eac023f3cdee730bbb43f983ca141e1baab5aa312a Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.029962 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.030004 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.030150 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.030212 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.062022 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.099095 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.099710 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.599694627 +0000 UTC m=+170.666815900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.203146 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.203491 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.703474815 +0000 UTC m=+170.770596078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.256260 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:05 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:05 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:05 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.256319 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.306979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.307383 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.807370847 +0000 UTC m=+170.874492110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.421764 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.423293 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:05.923275684 +0000 UTC m=+170.990396947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.530914 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.531304 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.031288564 +0000 UTC m=+171.098409827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.632254 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.632641 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.132620827 +0000 UTC m=+171.199742090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.658240 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" event={"ID":"2c156b30-d262-4fdc-a70b-eb1703422f01","Type":"ContainerStarted","Data":"106478a7a953cdc6035b88ba53f98d9f53794f51a0679a0b1dbf8e13aa6b9b4e"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.680686 4729 generic.go:334] "Generic (PLEG): container finished" podID="633788b3-11e3-447b-91cf-52a9563c052a" containerID="e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920" exitCode=0 Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.680811 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerDied","Data":"e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.695923 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.720761 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerStarted","Data":"b730cfb804bc4e316c6e5eca6ef9b8f22e8ad50592955dd0cf2f84927aceeba1"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.721236 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerStarted","Data":"df0b77d99905d465197b14eac023f3cdee730bbb43f983ca141e1baab5aa312a"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.750871 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.751872 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.251858438 +0000 UTC m=+171.318979691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.768399 4729 generic.go:334] "Generic (PLEG): container finished" podID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerID="76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7" exitCode=0 Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.768531 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerDied","Data":"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.823881 4729 generic.go:334] "Generic (PLEG): container finished" podID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerID="8d3c03b5c905f7999107cf9f39a906b69472ca1ee1b3b4ec57fd117fb45388b9" exitCode=0 Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.826039 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerDied","Data":"8d3c03b5c905f7999107cf9f39a906b69472ca1ee1b3b4ec57fd117fb45388b9"} Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.839114 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4zb87" Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.853760 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.854880 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.354844372 +0000 UTC m=+171.421965635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.914087 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:50:05 crc kubenswrapper[4729]: I0127 06:50:05.955826 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:05 crc kubenswrapper[4729]: E0127 06:50:05.956711 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.45669881 +0000 UTC m=+171.523820073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.060564 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.061486 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.56146575 +0000 UTC m=+171.628587013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.164315 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.164884 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.664869287 +0000 UTC m=+171.731990550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.191418 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.192206 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.193002 4729 patch_prober.go:28] interesting pod/console-f9d7485db-gw87z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.193041 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gw87z" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.223739 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.242337 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.245619 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:06 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:06 crc 
kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:06 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.245666 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.270387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.271583 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.771568787 +0000 UTC m=+171.838690050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.313624 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.382566 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.383685 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.883661535 +0000 UTC m=+171.950782798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.408437 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.427280 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.427635 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.450001 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.450898 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.462677 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.466861 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.468227 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.484309 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.486065 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:06.98603995 +0000 UTC m=+172.053161213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.586724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.587059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.587169 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.587487 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.087473376 +0000 UTC m=+172.154594639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.661423 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7k5m2" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.690881 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.690997 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.190980366 +0000 UTC m=+172.258101629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.691270 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.691330 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.691368 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.691693 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.191686198 +0000 UTC m=+172.258807461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.691867 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.713980 4729 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.774275 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.792173 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.793404 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.293381243 +0000 UTC m=+172.360502506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.823937 4729 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7p9gf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]log ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]etcd ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/max-in-flight-filter ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 06:50:06 crc kubenswrapper[4729]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 06:50:06 crc kubenswrapper[4729]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/openshift.io-startinformers ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 06:50:06 crc kubenswrapper[4729]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 06:50:06 crc kubenswrapper[4729]: livez check failed Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.824011 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" podUID="b80055cd-9c20-4bd9-8cd3-283fdf3bcc6f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.853018 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerStarted","Data":"15e4b6957383ce5d182305f6bc940c89ea3c95f75c523ab1df3ff65b164ef2bd"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.854517 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d720c2f-06c4-4b0e-b521-94c283ff3eb5","Type":"ContainerStarted","Data":"60c61abd099fef78d57774a439613049f47b49f44ad9e683c81ee5ab5f7751e0"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.854540 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d720c2f-06c4-4b0e-b521-94c283ff3eb5","Type":"ContainerStarted","Data":"62ce6a6825e30d2cd7d1c34b1c37021c95736148d96fd4c680557bad8f363d15"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.855428 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.886130 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" event={"ID":"a7b01586-c627-4be0-aa6f-c0942b4d14de","Type":"ContainerStarted","Data":"a085ae3a5c6a171c98f39e3faa2a6f367073266d37a4f5e45714df31a927116c"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.894965 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:06 crc kubenswrapper[4729]: E0127 06:50:06.895442 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.395424837 +0000 UTC m=+172.462546100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.907281 4729 generic.go:334] "Generic (PLEG): container finished" podID="83578f10-10b1-4953-902d-cf066f164ffe" containerID="b730cfb804bc4e316c6e5eca6ef9b8f22e8ad50592955dd0cf2f84927aceeba1" exitCode=0 Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.907419 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerDied","Data":"b730cfb804bc4e316c6e5eca6ef9b8f22e8ad50592955dd0cf2f84927aceeba1"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.923849 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerStarted","Data":"0a4607043bbd666cf2865761002088e1d0c377bb619b4f821a053e87d949628c"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.923904 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerStarted","Data":"4acb185e41834d8812e96e916178d07fe1048964a71dc09f90ca4eccd35002fc"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.962877 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqs5z" event={"ID":"2c156b30-d262-4fdc-a70b-eb1703422f01","Type":"ContainerStarted","Data":"1f296163545922f0daabd9506c4a58c8434d76bc2ea98d08508d12ab1c0a190d"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.976481 4729 generic.go:334] "Generic (PLEG): container finished" podID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerID="e59edaae43ba74e54946e7b1ef26895a612251340f7728304d6b2270398b4536" exitCode=0 Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.977285 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerDied","Data":"e59edaae43ba74e54946e7b1ef26895a612251340f7728304d6b2270398b4536"} Jan 27 06:50:06 crc kubenswrapper[4729]: I0127 06:50:06.977314 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerStarted","Data":"ead6b192054b298a632f4a08a445ac7ab8e064596df9d5e010a88375b2f59578"} Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.005208 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.006649 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.506623787 +0000 UTC m=+172.573745060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.107064 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.110248 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.61023414 +0000 UTC m=+172.677355393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.117371 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-znw8f" podStartSLOduration=14.117354452 podStartE2EDuration="14.117354452s" podCreationTimestamp="2026-01-27 06:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:07.003579442 +0000 UTC m=+172.070700695" watchObservedRunningTime="2026-01-27 06:50:07.117354452 +0000 UTC m=+172.184475715" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.208254 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.208626 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.708609861 +0000 UTC m=+172.775731124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.247930 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:07 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:07 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:07 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.247984 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.310102 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.310565 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.810552823 +0000 UTC m=+172.877674086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.342735 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.342713984 podStartE2EDuration="4.342713984s" podCreationTimestamp="2026-01-27 06:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:07.234373183 +0000 UTC m=+172.301494446" watchObservedRunningTime="2026-01-27 06:50:07.342713984 +0000 UTC m=+172.409835247" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.398961 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xqs5z" podStartSLOduration=148.398943253 podStartE2EDuration="2m28.398943253s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:07.396468016 +0000 UTC m=+172.463589279" watchObservedRunningTime="2026-01-27 06:50:07.398943253 +0000 UTC m=+172.466064506" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.411317 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.411736 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:07.911705181 +0000 UTC m=+172.978826434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.513991 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.514360 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 06:50:08.014347344 +0000 UTC m=+173.081468607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxkmt" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.615504 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:07 crc kubenswrapper[4729]: E0127 06:50:07.616033 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:50:08.116015958 +0000 UTC m=+173.183137211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.631030 4729 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T06:50:06.714002543Z","Handler":null,"Name":""} Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.673605 4729 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.673683 4729 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.718080 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.778044 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.778200 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.811989 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.838478 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxkmt\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.879307 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.921111 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:50:07 crc kubenswrapper[4729]: I0127 06:50:07.953877 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:07.996656 4729 generic.go:334] "Generic (PLEG): container finished" podID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerID="4f3d68713bab7066eb751546cfc64759b5fecb14c62e315a281846ed03354813" exitCode=0 Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:07.996741 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerDied","Data":"4f3d68713bab7066eb751546cfc64759b5fecb14c62e315a281846ed03354813"} Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.034728 4729 generic.go:334] "Generic (PLEG): container finished" podID="8d720c2f-06c4-4b0e-b521-94c283ff3eb5" containerID="60c61abd099fef78d57774a439613049f47b49f44ad9e683c81ee5ab5f7751e0" exitCode=0 Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.034913 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d720c2f-06c4-4b0e-b521-94c283ff3eb5","Type":"ContainerDied","Data":"60c61abd099fef78d57774a439613049f47b49f44ad9e683c81ee5ab5f7751e0"} Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.036365 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"529c6ab5-dcb4-4993-a063-07768425e6a0","Type":"ContainerStarted","Data":"a4708c1a84019a2a3ecf4bc5f4da35e6a0ebe62eca03d34c905cfd4bd5d5480e"} Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.037402 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef199f3e-dc03-4397-91ce-9da605f06991" containerID="0a4607043bbd666cf2865761002088e1d0c377bb619b4f821a053e87d949628c" exitCode=0 Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.038147 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerDied","Data":"0a4607043bbd666cf2865761002088e1d0c377bb619b4f821a053e87d949628c"} Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.259763 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:08 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:08 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:08 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.260006 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.409167 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 06:50:08 crc kubenswrapper[4729]: I0127 06:50:08.514681 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.049487 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" 
event={"ID":"1e2c5da4-8ac5-4e80-b351-feffc47032e6","Type":"ContainerStarted","Data":"2c761a807db84fc14dd856cb41d074935470bc408581340a40589eccf6b7fa9d"} Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.060207 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"529c6ab5-dcb4-4993-a063-07768425e6a0","Type":"ContainerStarted","Data":"a88d7ed1ebe807d52b52ed1751aed0ee10e8f760bac343ed21c9ac259590dabf"} Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.257056 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:09 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:09 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:09 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.257393 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.702097 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.723811 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.723786497 podStartE2EDuration="3.723786497s" podCreationTimestamp="2026-01-27 06:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:09.084639721 +0000 UTC m=+174.151760984" watchObservedRunningTime="2026-01-27 06:50:09.723786497 +0000 UTC m=+174.790907760" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.769524 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access\") pod \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.769755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir\") pod \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\" (UID: \"8d720c2f-06c4-4b0e-b521-94c283ff3eb5\") " Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.770281 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d720c2f-06c4-4b0e-b521-94c283ff3eb5" (UID: "8d720c2f-06c4-4b0e-b521-94c283ff3eb5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.778940 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d720c2f-06c4-4b0e-b521-94c283ff3eb5" (UID: "8d720c2f-06c4-4b0e-b521-94c283ff3eb5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.871854 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:09 crc kubenswrapper[4729]: I0127 06:50:09.871888 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d720c2f-06c4-4b0e-b521-94c283ff3eb5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.069729 4729 generic.go:334] "Generic (PLEG): container finished" podID="230217b5-781e-408c-816e-13bff539250b" containerID="09b1f5012d7f125afe48d4ef36509801dbc7c3f17526ee1edfeb8533c3e707f6" exitCode=0 Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.070147 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" event={"ID":"230217b5-781e-408c-816e-13bff539250b","Type":"ContainerDied","Data":"09b1f5012d7f125afe48d4ef36509801dbc7c3f17526ee1edfeb8533c3e707f6"} Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.087947 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" event={"ID":"1e2c5da4-8ac5-4e80-b351-feffc47032e6","Type":"ContainerStarted","Data":"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f"} Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.101949 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d720c2f-06c4-4b0e-b521-94c283ff3eb5","Type":"ContainerDied","Data":"62ce6a6825e30d2cd7d1c34b1c37021c95736148d96fd4c680557bad8f363d15"} Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.101997 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ce6a6825e30d2cd7d1c34b1c37021c95736148d96fd4c680557bad8f363d15" Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.102057 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.109749 4729 generic.go:334] "Generic (PLEG): container finished" podID="529c6ab5-dcb4-4993-a063-07768425e6a0" containerID="a88d7ed1ebe807d52b52ed1751aed0ee10e8f760bac343ed21c9ac259590dabf" exitCode=0 Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.109786 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"529c6ab5-dcb4-4993-a063-07768425e6a0","Type":"ContainerDied","Data":"a88d7ed1ebe807d52b52ed1751aed0ee10e8f760bac343ed21c9ac259590dabf"} Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.146230 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" podStartSLOduration=151.14621364 podStartE2EDuration="2m31.14621364s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:10.124100142 +0000 UTC m=+175.191221415" watchObservedRunningTime="2026-01-27 06:50:10.14621364 +0000 UTC m=+175.213334903" Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.247702 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:10 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:10 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:10 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:10 crc kubenswrapper[4729]: I0127 06:50:10.247771 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.122477 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.245244 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:11 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:11 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:11 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.245299 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.421967 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m4pm9" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.433601 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.449510 4729 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7p9gf" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.568182 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.616742 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume\") pod \"230217b5-781e-408c-816e-13bff539250b\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.616901 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnngx\" (UniqueName: \"kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx\") pod \"230217b5-781e-408c-816e-13bff539250b\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.616954 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume\") pod \"230217b5-781e-408c-816e-13bff539250b\" (UID: \"230217b5-781e-408c-816e-13bff539250b\") " Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.628132 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume" (OuterVolumeSpecName: "config-volume") pod "230217b5-781e-408c-816e-13bff539250b" (UID: "230217b5-781e-408c-816e-13bff539250b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.667242 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "230217b5-781e-408c-816e-13bff539250b" (UID: "230217b5-781e-408c-816e-13bff539250b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.683766 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx" (OuterVolumeSpecName: "kube-api-access-qnngx") pod "230217b5-781e-408c-816e-13bff539250b" (UID: "230217b5-781e-408c-816e-13bff539250b"). InnerVolumeSpecName "kube-api-access-qnngx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.720376 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnngx\" (UniqueName: \"kubernetes.io/projected/230217b5-781e-408c-816e-13bff539250b-kube-api-access-qnngx\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.720397 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/230217b5-781e-408c-816e-13bff539250b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.720406 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/230217b5-781e-408c-816e-13bff539250b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.764688 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.822267 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir\") pod \"529c6ab5-dcb4-4993-a063-07768425e6a0\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.822528 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "529c6ab5-dcb4-4993-a063-07768425e6a0" (UID: "529c6ab5-dcb4-4993-a063-07768425e6a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.822667 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access\") pod \"529c6ab5-dcb4-4993-a063-07768425e6a0\" (UID: \"529c6ab5-dcb4-4993-a063-07768425e6a0\") " Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.824959 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/529c6ab5-dcb4-4993-a063-07768425e6a0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.830350 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "529c6ab5-dcb4-4993-a063-07768425e6a0" (UID: "529c6ab5-dcb4-4993-a063-07768425e6a0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:11 crc kubenswrapper[4729]: I0127 06:50:11.927111 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/529c6ab5-dcb4-4993-a063-07768425e6a0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.185998 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.186197 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-s9fpg" event={"ID":"230217b5-781e-408c-816e-13bff539250b","Type":"ContainerDied","Data":"cda3036c4f2d9c3d811d10e636ad168da78826ba872d9b82f7b76af99d5db5b0"} Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.186267 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda3036c4f2d9c3d811d10e636ad168da78826ba872d9b82f7b76af99d5db5b0" Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.190703 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.191369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"529c6ab5-dcb4-4993-a063-07768425e6a0","Type":"ContainerDied","Data":"a4708c1a84019a2a3ecf4bc5f4da35e6a0ebe62eca03d34c905cfd4bd5d5480e"} Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.191414 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4708c1a84019a2a3ecf4bc5f4da35e6a0ebe62eca03d34c905cfd4bd5d5480e" Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.254317 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:12 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:12 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:12 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:12 crc kubenswrapper[4729]: I0127 06:50:12.254384 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:13 crc kubenswrapper[4729]: I0127 06:50:13.245250 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:13 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:13 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:13 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:13 crc kubenswrapper[4729]: I0127 06:50:13.245313 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:14 crc kubenswrapper[4729]: I0127 06:50:14.246152 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:14 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:14 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:14 crc 
kubenswrapper[4729]: healthz check failed Jan 27 06:50:14 crc kubenswrapper[4729]: I0127 06:50:14.246614 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.027012 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.027138 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.027212 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlbz8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.027229 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlbz8" podUID="849b4067-5e9e-4864-912a-d5a7aa747232" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.244740 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:15 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:15 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:15 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:15 crc kubenswrapper[4729]: I0127 06:50:15.244807 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:16 crc kubenswrapper[4729]: I0127 06:50:16.190083 4729 patch_prober.go:28] interesting pod/console-f9d7485db-gw87z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 27 06:50:16 crc kubenswrapper[4729]: I0127 06:50:16.190149 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gw87z" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 27 06:50:16 crc kubenswrapper[4729]: I0127 06:50:16.244837 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:16 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:16 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:16 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:16 crc kubenswrapper[4729]: I0127 06:50:16.245466 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:17 crc kubenswrapper[4729]: I0127 06:50:17.245682 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:17 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:17 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:17 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:17 crc kubenswrapper[4729]: I0127 06:50:17.245751 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:18 crc kubenswrapper[4729]: I0127 06:50:18.247217 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:18 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 06:50:18 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:18 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:18 crc kubenswrapper[4729]: I0127 06:50:18.247541 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:19 crc kubenswrapper[4729]: I0127 06:50:19.246375 4729 patch_prober.go:28] interesting pod/router-default-5444994796-nl6sq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:50:19 crc kubenswrapper[4729]: [+]has-synced ok Jan 27 06:50:19 crc kubenswrapper[4729]: [+]process-running ok Jan 27 06:50:19 crc kubenswrapper[4729]: healthz check failed Jan 27 06:50:19 crc kubenswrapper[4729]: I0127 06:50:19.246440 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl6sq" podUID="5b3c2904-e0f2-436a-8172-6639cc9661a9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:20 crc kubenswrapper[4729]: I0127 06:50:20.245133 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:50:20 crc kubenswrapper[4729]: I0127 06:50:20.248545 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nl6sq" Jan 27 06:50:22 crc kubenswrapper[4729]: I0127 06:50:22.914803 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:50:22 crc kubenswrapper[4729]: I0127 06:50:22.915274 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" containerID="cri-o://032b999a2619a1a156fa8896ae6c902e4136287f895264e8b052136adc00e7a4" gracePeriod=30 Jan 27 06:50:22 crc kubenswrapper[4729]: I0127 06:50:22.956862 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:50:22 crc kubenswrapper[4729]: I0127 06:50:22.957153 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" containerID="cri-o://433b982c1de85739731644e53044e3d05f315465fbcdf925d0fe17a84b79ac78" gracePeriod=30 Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.413253 4729 generic.go:334] "Generic (PLEG): container finished" podID="b9b62aea-0185-4b23-998b-a210a5612512" containerID="032b999a2619a1a156fa8896ae6c902e4136287f895264e8b052136adc00e7a4" exitCode=0 Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.413337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" event={"ID":"b9b62aea-0185-4b23-998b-a210a5612512","Type":"ContainerDied","Data":"032b999a2619a1a156fa8896ae6c902e4136287f895264e8b052136adc00e7a4"} Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.415508 4729 generic.go:334] "Generic (PLEG): container finished" podID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerID="433b982c1de85739731644e53044e3d05f315465fbcdf925d0fe17a84b79ac78" exitCode=0 Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.415542 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" event={"ID":"c64cf35a-6747-43f9-bde5-c8060518bcda","Type":"ContainerDied","Data":"433b982c1de85739731644e53044e3d05f315465fbcdf925d0fe17a84b79ac78"} Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.554363 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vzdl2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.554423 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.591090 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lsthv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 06:50:24 crc kubenswrapper[4729]: I0127 06:50:24.591155 4729 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 06:50:25 crc kubenswrapper[4729]: I0127 06:50:25.031663 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jlbz8" Jan 27 06:50:26 crc kubenswrapper[4729]: I0127 06:50:26.215594 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:50:26 crc kubenswrapper[4729]: I0127 06:50:26.222851 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 06:50:27 crc kubenswrapper[4729]: I0127 06:50:27.890606 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.084837 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.109551 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca\") pod \"c64cf35a-6747-43f9-bde5-c8060518bcda\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.109597 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvtb\" (UniqueName: \"kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb\") pod \"c64cf35a-6747-43f9-bde5-c8060518bcda\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.109631 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config\") pod \"c64cf35a-6747-43f9-bde5-c8060518bcda\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.109697 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert\") pod \"c64cf35a-6747-43f9-bde5-c8060518bcda\" (UID: \"c64cf35a-6747-43f9-bde5-c8060518bcda\") " Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.111744 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config" (OuterVolumeSpecName: "config") pod "c64cf35a-6747-43f9-bde5-c8060518bcda" (UID: "c64cf35a-6747-43f9-bde5-c8060518bcda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.112386 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca" (OuterVolumeSpecName: "client-ca") pod "c64cf35a-6747-43f9-bde5-c8060518bcda" (UID: "c64cf35a-6747-43f9-bde5-c8060518bcda"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.117336 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb" (OuterVolumeSpecName: "kube-api-access-2dvtb") pod "c64cf35a-6747-43f9-bde5-c8060518bcda" (UID: "c64cf35a-6747-43f9-bde5-c8060518bcda"). InnerVolumeSpecName "kube-api-access-2dvtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119431 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:28 crc kubenswrapper[4729]: E0127 06:50:28.119695 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230217b5-781e-408c-816e-13bff539250b" containerName="collect-profiles" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119711 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="230217b5-781e-408c-816e-13bff539250b" containerName="collect-profiles" Jan 27 06:50:28 crc kubenswrapper[4729]: E0127 06:50:28.119741 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d720c2f-06c4-4b0e-b521-94c283ff3eb5" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119747 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d720c2f-06c4-4b0e-b521-94c283ff3eb5" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: E0127 06:50:28.119756 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119762 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" Jan 27 06:50:28 crc kubenswrapper[4729]: E0127 06:50:28.119768 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529c6ab5-dcb4-4993-a063-07768425e6a0" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119774 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="529c6ab5-dcb4-4993-a063-07768425e6a0" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119894 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="529c6ab5-dcb4-4993-a063-07768425e6a0" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119904 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d720c2f-06c4-4b0e-b521-94c283ff3eb5" containerName="pruner" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119911 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" containerName="route-controller-manager" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.119924 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="230217b5-781e-408c-816e-13bff539250b" containerName="collect-profiles" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.120366 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.121227 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c64cf35a-6747-43f9-bde5-c8060518bcda" (UID: "c64cf35a-6747-43f9-bde5-c8060518bcda"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.135301 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.212432 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.212464 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvtb\" (UniqueName: \"kubernetes.io/projected/c64cf35a-6747-43f9-bde5-c8060518bcda-kube-api-access-2dvtb\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.212499 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c64cf35a-6747-43f9-bde5-c8060518bcda-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.212508 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c64cf35a-6747-43f9-bde5-c8060518bcda-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.314057 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.314170 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.314204 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.314241 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2g4\" (UniqueName: \"kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " 
pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.415133 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.415208 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.415231 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.415258 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v2g4\" (UniqueName: \"kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.416724 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.416820 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.435791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.442854 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v2g4\" (UniqueName: \"kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4\") pod \"route-controller-manager-dddc87f8-tr8vr\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 
06:50:28.443061 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" event={"ID":"c64cf35a-6747-43f9-bde5-c8060518bcda","Type":"ContainerDied","Data":"c6e9e92070b51c04aef0bd93a5c7e46e5e86980032f95442b5c24254a61ed460"} Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.443126 4729 scope.go:117] "RemoveContainer" containerID="433b982c1de85739731644e53044e3d05f315465fbcdf925d0fe17a84b79ac78" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.443247 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.461450 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.472868 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:50:28 crc kubenswrapper[4729]: I0127 06:50:28.475747 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzdl2"] Jan 27 06:50:30 crc kubenswrapper[4729]: I0127 06:50:30.370763 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64cf35a-6747-43f9-bde5-c8060518bcda" path="/var/lib/kubelet/pods/c64cf35a-6747-43f9-bde5-c8060518bcda/volumes" Jan 27 06:50:31 crc kubenswrapper[4729]: I0127 06:50:31.087484 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:50:31 crc kubenswrapper[4729]: I0127 06:50:31.087564 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:50:35 crc kubenswrapper[4729]: I0127 06:50:35.591239 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lsthv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:50:35 crc kubenswrapper[4729]: I0127 06:50:35.591646 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:50:36 crc kubenswrapper[4729]: I0127 06:50:36.639980 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nhsnw" Jan 27 06:50:42 crc kubenswrapper[4729]: I0127 06:50:42.818236 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:50:42 crc kubenswrapper[4729]: I0127 06:50:42.819819 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:42 crc kubenswrapper[4729]: I0127 06:50:42.822221 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:50:42 crc kubenswrapper[4729]: I0127 06:50:42.824285 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:50:42 crc kubenswrapper[4729]: I0127 06:50:42.841602 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:50:42 crc kubenswrapper[4729]: E0127 06:50:42.931616 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:50:42 crc kubenswrapper[4729]: E0127 06:50:42.932128 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjjns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wrxcn_openshift-marketplace(e500073e-46f7-4fa1-aaf9-f99824cefcc3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:42 crc kubenswrapper[4729]: E0127 06:50:42.933714 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wrxcn" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.018765 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.019060 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.034540 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.120218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.120294 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.120383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.153481 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:43 crc kubenswrapper[4729]: I0127 06:50:43.163393 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.654547 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wrxcn" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.704389 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.742758 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.743049 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.743060 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.743172 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.759537 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.759727 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhfg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-krr4s_openshift-marketplace(83578f10-10b1-4953-902d-cf066f164ffe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.760809 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-krr4s" podUID="83578f10-10b1-4953-902d-cf066f164ffe" Jan 27 06:50:44 crc 
kubenswrapper[4729]: I0127 06:50:44.760947 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.779577 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.793627 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.793769 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzddq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-978h9_openshift-marketplace(633788b3-11e3-447b-91cf-52a9563c052a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:44 crc kubenswrapper[4729]: E0127 06:50:44.795167 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-978h9" podUID="633788b3-11e3-447b-91cf-52a9563c052a" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861402 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert\") pod \"b9b62aea-0185-4b23-998b-a210a5612512\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861458 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca\") pod \"b9b62aea-0185-4b23-998b-a210a5612512\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861486 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config\") pod \"b9b62aea-0185-4b23-998b-a210a5612512\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861519 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs68p\" (UniqueName: \"kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p\") pod \"b9b62aea-0185-4b23-998b-a210a5612512\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861608 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles\") pod \"b9b62aea-0185-4b23-998b-a210a5612512\" (UID: \"b9b62aea-0185-4b23-998b-a210a5612512\") " Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861770 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861807 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkrc\" (UniqueName: \"kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861890 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.861908 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.863001 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9b62aea-0185-4b23-998b-a210a5612512" (UID: "b9b62aea-0185-4b23-998b-a210a5612512"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.863431 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config" (OuterVolumeSpecName: "config") pod "b9b62aea-0185-4b23-998b-a210a5612512" (UID: "b9b62aea-0185-4b23-998b-a210a5612512"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.863542 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b9b62aea-0185-4b23-998b-a210a5612512" (UID: "b9b62aea-0185-4b23-998b-a210a5612512"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.866294 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p" (OuterVolumeSpecName: "kube-api-access-hs68p") pod "b9b62aea-0185-4b23-998b-a210a5612512" (UID: "b9b62aea-0185-4b23-998b-a210a5612512"). InnerVolumeSpecName "kube-api-access-hs68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.866538 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9b62aea-0185-4b23-998b-a210a5612512" (UID: "b9b62aea-0185-4b23-998b-a210a5612512"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963214 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963259 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963307 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963335 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkrc\" (UniqueName: \"kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963404 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963416 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963424 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs68p\" (UniqueName: \"kubernetes.io/projected/b9b62aea-0185-4b23-998b-a210a5612512-kube-api-access-hs68p\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963434 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b62aea-0185-4b23-998b-a210a5612512-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.963443 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b62aea-0185-4b23-998b-a210a5612512-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.964748 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.966272 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.967219 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.975058 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:44 crc kubenswrapper[4729]: I0127 06:50:44.981491 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkrc\" (UniqueName: \"kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc\") pod \"controller-manager-b7b84f7fd-mqldf\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.089292 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.558868 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.559244 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" event={"ID":"b9b62aea-0185-4b23-998b-a210a5612512","Type":"ContainerDied","Data":"db04705c2a10f960cb387459fde22771749bf1df0472ed46ea7fc1ae00a07be7"} Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.590965 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lsthv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.591042 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lsthv" podUID="b9b62aea-0185-4b23-998b-a210a5612512" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.611038 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:50:45 crc kubenswrapper[4729]: I0127 06:50:45.618720 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lsthv"] Jan 27 06:50:46 crc kubenswrapper[4729]: I0127 06:50:46.369791 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b62aea-0185-4b23-998b-a210a5612512" path="/var/lib/kubelet/pods/b9b62aea-0185-4b23-998b-a210a5612512/volumes" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.023187 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.028663 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.032751 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.221113 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.221174 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.221193 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.322006 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.322053 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.322144 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.322165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.322271 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.343861 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.356643 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:48 crc kubenswrapper[4729]: E0127 06:50:48.866190 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-krr4s" podUID="83578f10-10b1-4953-902d-cf066f164ffe" Jan 27 06:50:48 crc kubenswrapper[4729]: E0127 06:50:48.868585 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-978h9" podUID="633788b3-11e3-447b-91cf-52a9563c052a" Jan 27 06:50:48 crc kubenswrapper[4729]: I0127 06:50:48.895147 4729 scope.go:117] "RemoveContainer" containerID="032b999a2619a1a156fa8896ae6c902e4136287f895264e8b052136adc00e7a4" Jan 27 06:50:48 crc kubenswrapper[4729]: E0127 06:50:48.985873 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 06:50:48 crc kubenswrapper[4729]: E0127 06:50:48.986042 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6zvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gv9zq_openshift-marketplace(f80985e9-d009-4ceb-bd8e-535ef0e0a9e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:48 crc kubenswrapper[4729]: E0127 06:50:48.990409 4729 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gv9zq" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.012462 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.012637 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jh62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lbbks_openshift-marketplace(7ff397c2-8ed9-4073-ae6c-8600c382f227): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.013967 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lbbks" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.046280 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.046626 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wk55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vmbbk_openshift-marketplace(ef199f3e-dc03-4397-91ce-9da605f06991): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.048758 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vmbbk" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.187530 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.431038 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.514799 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.591245 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:50:49 crc kubenswrapper[4729]: W0127 06:50:49.603043 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaabfcb4_beec_42d5_85ed_37cec1692b7b.slice/crio-0d65ff0102a9f5ca20b412501e939d8e5a587d1b4a5b980d29cb41904db2e14a WatchSource:0}: Error finding container 0d65ff0102a9f5ca20b412501e939d8e5a587d1b4a5b980d29cb41904db2e14a: Status 404 returned error can't find the container with id 0d65ff0102a9f5ca20b412501e939d8e5a587d1b4a5b980d29cb41904db2e14a Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.633472 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"1c9b17c6-9b38-41de-b12e-a121c4639b81","Type":"ContainerStarted","Data":"a65140302c57710f63cd09ef5daa3213fcbba436a1aba6f15ec9bbabc383dbf6"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.637798 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" event={"ID":"eaabfcb4-beec-42d5-85ed-37cec1692b7b","Type":"ContainerStarted","Data":"0d65ff0102a9f5ca20b412501e939d8e5a587d1b4a5b980d29cb41904db2e14a"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.639876 4729 generic.go:334] "Generic (PLEG): container finished" podID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerID="4112be9c12f4fcc19378915c0f017e9219089d84713cd66048520684424f7c8c" exitCode=0 Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.639999 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerDied","Data":"4112be9c12f4fcc19378915c0f017e9219089d84713cd66048520684424f7c8c"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.642010 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"13f1651f-1170-4455-bbb3-bcf62eb786b7","Type":"ContainerStarted","Data":"3d725586f46b49789888e1fbcb40fb8827a33a66fd1f1dfbd886cc39643d26eb"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.645101 4729 generic.go:334] "Generic (PLEG): container finished" podID="f34f6802-9269-4d26-abee-6f480d374416" containerID="1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47" exitCode=0 Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.645159 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerDied","Data":"1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.657303 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" event={"ID":"21110f0f-a257-44a8-9130-6c4b121c5cdb","Type":"ContainerStarted","Data":"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.657349 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" event={"ID":"21110f0f-a257-44a8-9130-6c4b121c5cdb","Type":"ContainerStarted","Data":"1027eba1f2ec1200fd33117d9de39e9e286d98a0a5058d3429ca6de2cf7dfc39"} Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.657461 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerName="route-controller-manager" containerID="cri-o://087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7" gracePeriod=30 Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.658122 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.674271 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gv9zq" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.676457 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lbbks" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" Jan 27 06:50:49 crc kubenswrapper[4729]: E0127 06:50:49.679055 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vmbbk" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.703389 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" podStartSLOduration=26.703373678 podStartE2EDuration="26.703373678s" podCreationTimestamp="2026-01-27 06:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:49.701275932 +0000 UTC m=+214.768397195" watchObservedRunningTime="2026-01-27 06:50:49.703373678 +0000 UTC m=+214.770494931" Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.877606 4729 patch_prober.go:28] interesting pod/route-controller-manager-dddc87f8-tr8vr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:50876->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 27 06:50:49 crc kubenswrapper[4729]: I0127 06:50:49.878000 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:50876->10.217.0.54:8443: read: connection reset by peer" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.223686 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-dddc87f8-tr8vr_21110f0f-a257-44a8-9130-6c4b121c5cdb/route-controller-manager/0.log" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.223942 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.261566 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:50:50 crc kubenswrapper[4729]: E0127 06:50:50.261789 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerName="route-controller-manager" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.261806 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerName="route-controller-manager" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.261904 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerName="route-controller-manager" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.262387 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.283397 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.347380 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v2g4\" (UniqueName: \"kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4\") pod \"21110f0f-a257-44a8-9130-6c4b121c5cdb\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.347457 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert\") pod \"21110f0f-a257-44a8-9130-6c4b121c5cdb\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.347492 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config\") pod \"21110f0f-a257-44a8-9130-6c4b121c5cdb\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.347559 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca\") pod \"21110f0f-a257-44a8-9130-6c4b121c5cdb\" (UID: \"21110f0f-a257-44a8-9130-6c4b121c5cdb\") " Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.348489 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca" (OuterVolumeSpecName: "client-ca") pod "21110f0f-a257-44a8-9130-6c4b121c5cdb" (UID: "21110f0f-a257-44a8-9130-6c4b121c5cdb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.348910 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config" (OuterVolumeSpecName: "config") pod "21110f0f-a257-44a8-9130-6c4b121c5cdb" (UID: "21110f0f-a257-44a8-9130-6c4b121c5cdb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.352687 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21110f0f-a257-44a8-9130-6c4b121c5cdb" (UID: "21110f0f-a257-44a8-9130-6c4b121c5cdb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.352735 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4" (OuterVolumeSpecName: "kube-api-access-4v2g4") pod "21110f0f-a257-44a8-9130-6c4b121c5cdb" (UID: "21110f0f-a257-44a8-9130-6c4b121c5cdb"). InnerVolumeSpecName "kube-api-access-4v2g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449205 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggz4\" (UniqueName: \"kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449309 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449332 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449382 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21110f0f-a257-44a8-9130-6c4b121c5cdb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449394 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.449402 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21110f0f-a257-44a8-9130-6c4b121c5cdb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:50 
crc kubenswrapper[4729]: I0127 06:50:50.449410 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v2g4\" (UniqueName: \"kubernetes.io/projected/21110f0f-a257-44a8-9130-6c4b121c5cdb-kube-api-access-4v2g4\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.550771 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggz4\" (UniqueName: \"kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.551221 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.552083 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.552146 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.552494 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.553451 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.556462 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert\") pod \"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.568863 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggz4\" (UniqueName: \"kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4\") pod 
\"route-controller-manager-6d7888bf8f-jq9mk\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.576558 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.687055 4729 generic.go:334] "Generic (PLEG): container finished" podID="1c9b17c6-9b38-41de-b12e-a121c4639b81" containerID="0b4439a2162d1e87d1c259abcb06948259a7b7d128e550efe68482127db7c436" exitCode=0 Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.687414 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c9b17c6-9b38-41de-b12e-a121c4639b81","Type":"ContainerDied","Data":"0b4439a2162d1e87d1c259abcb06948259a7b7d128e550efe68482127db7c436"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.690836 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" event={"ID":"eaabfcb4-beec-42d5-85ed-37cec1692b7b","Type":"ContainerStarted","Data":"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.691815 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.693605 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerStarted","Data":"a2904954bd05c8c2ce0c06477799a1f8fe916a60190a3681b40d64f69d18db62"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.695138 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"13f1651f-1170-4455-bbb3-bcf62eb786b7","Type":"ContainerStarted","Data":"3872c471aa8d17b5f3bf76dd14ba61766892a7da9beffe0411a132eab4072393"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.701799 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerStarted","Data":"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.716581 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.735796 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-dddc87f8-tr8vr_21110f0f-a257-44a8-9130-6c4b121c5cdb/route-controller-manager/0.log" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.735860 4729 generic.go:334] "Generic (PLEG): container finished" podID="21110f0f-a257-44a8-9130-6c4b121c5cdb" containerID="087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7" exitCode=255 Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.735913 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" 
event={"ID":"21110f0f-a257-44a8-9130-6c4b121c5cdb","Type":"ContainerDied","Data":"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.735944 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" event={"ID":"21110f0f-a257-44a8-9130-6c4b121c5cdb","Type":"ContainerDied","Data":"1027eba1f2ec1200fd33117d9de39e9e286d98a0a5058d3429ca6de2cf7dfc39"} Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.735963 4729 scope.go:117] "RemoveContainer" containerID="087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.736136 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.739583 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zxshj" podStartSLOduration=5.503149312 podStartE2EDuration="49.739564105s" podCreationTimestamp="2026-01-27 06:50:01 +0000 UTC" firstStartedPulling="2026-01-27 06:50:05.861003583 +0000 UTC m=+170.928124846" lastFinishedPulling="2026-01-27 06:50:50.097418376 +0000 UTC m=+215.164539639" observedRunningTime="2026-01-27 06:50:50.733236275 +0000 UTC m=+215.800357538" watchObservedRunningTime="2026-01-27 06:50:50.739564105 +0000 UTC m=+215.806685368" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.758624 4729 scope.go:117] "RemoveContainer" containerID="087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7" Jan 27 06:50:50 crc kubenswrapper[4729]: E0127 06:50:50.759684 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7\": container with ID starting with 087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7 not found: ID does not exist" containerID="087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.759730 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7"} err="failed to get container status \"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7\": rpc error: code = NotFound desc = could not find container \"087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7\": container with ID starting with 087db93dba786a60aa9b89e9392267a30db1aa7d60923d367b7f87eadb7099d7 not found: ID does not exist" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.761242 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8w6kb" podStartSLOduration=4.258475927 podStartE2EDuration="49.761231139s" podCreationTimestamp="2026-01-27 06:50:01 +0000 UTC" firstStartedPulling="2026-01-27 06:50:04.611108554 +0000 UTC m=+169.678229817" lastFinishedPulling="2026-01-27 06:50:50.113863766 +0000 UTC m=+215.180985029" observedRunningTime="2026-01-27 06:50:50.758395869 +0000 UTC m=+215.825517132" watchObservedRunningTime="2026-01-27 06:50:50.761231139 +0000 UTC m=+215.828352402" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.786720 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.7866982829999998 podStartE2EDuration="2.786698283s" podCreationTimestamp="2026-01-27 06:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:50.784312538 +0000 UTC m=+215.851433801" watchObservedRunningTime="2026-01-27 06:50:50.786698283 +0000 UTC m=+215.853819546" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.811147 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" podStartSLOduration=8.811123695 podStartE2EDuration="8.811123695s" podCreationTimestamp="2026-01-27 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:50.806271441 +0000 UTC m=+215.873392704" watchObservedRunningTime="2026-01-27 06:50:50.811123695 +0000 UTC m=+215.878244958" Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.826688 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.829871 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:50 crc kubenswrapper[4729]: I0127 06:50:50.834972 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dddc87f8-tr8vr"] Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.741385 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" event={"ID":"0a5f0899-4e89-4d3b-9e23-0943b088c3fc","Type":"ContainerStarted","Data":"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68"} Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.741628 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" event={"ID":"0a5f0899-4e89-4d3b-9e23-0943b088c3fc","Type":"ContainerStarted","Data":"ee769f769fcc1d3321d845912c190bb2c7a0765dfcd17a555638ced4d471fe76"} Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.742708 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.749878 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.788463 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" podStartSLOduration=8.788444982 podStartE2EDuration="8.788444982s" podCreationTimestamp="2026-01-27 06:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:51.766708446 +0000 UTC m=+216.833829709" watchObservedRunningTime="2026-01-27 06:50:51.788444982 +0000 UTC m=+216.855566245" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.894616 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.894659 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.930366 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:51 crc kubenswrapper[4729]: I0127 06:50:51.930412 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.033785 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.076671 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir\") pod \"1c9b17c6-9b38-41de-b12e-a121c4639b81\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.076803 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access\") pod \"1c9b17c6-9b38-41de-b12e-a121c4639b81\" (UID: \"1c9b17c6-9b38-41de-b12e-a121c4639b81\") " Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.076805 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c9b17c6-9b38-41de-b12e-a121c4639b81" (UID: "1c9b17c6-9b38-41de-b12e-a121c4639b81"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.078099 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c9b17c6-9b38-41de-b12e-a121c4639b81-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.095987 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c9b17c6-9b38-41de-b12e-a121c4639b81" (UID: "1c9b17c6-9b38-41de-b12e-a121c4639b81"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.179264 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9b17c6-9b38-41de-b12e-a121c4639b81-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.371393 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21110f0f-a257-44a8-9130-6c4b121c5cdb" path="/var/lib/kubelet/pods/21110f0f-a257-44a8-9130-6c4b121c5cdb/volumes" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.750045 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1c9b17c6-9b38-41de-b12e-a121c4639b81","Type":"ContainerDied","Data":"a65140302c57710f63cd09ef5daa3213fcbba436a1aba6f15ec9bbabc383dbf6"} Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.750118 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65140302c57710f63cd09ef5daa3213fcbba436a1aba6f15ec9bbabc383dbf6" Jan 27 06:50:52 crc kubenswrapper[4729]: I0127 06:50:52.750439 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:50:53 crc kubenswrapper[4729]: I0127 06:50:53.042686 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zxshj" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="registry-server" probeResult="failure" output=< Jan 27 06:50:53 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 06:50:53 crc kubenswrapper[4729]: > Jan 27 06:50:53 crc kubenswrapper[4729]: I0127 06:50:53.045649 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8w6kb" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="registry-server" probeResult="failure" output=< Jan 27 06:50:53 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 06:50:53 crc kubenswrapper[4729]: > Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.087011 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.087633 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.087697 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.088503 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:51:01 
crc kubenswrapper[4729]: I0127 06:51:01.088599 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd" gracePeriod=600 Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.799568 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerStarted","Data":"d7e72fdb0b21425a81206e5831c5515b6a51d5abfb52f2ad58f0884ecaf5da72"} Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.801560 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd" exitCode=0 Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.801603 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd"} Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.803426 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerStarted","Data":"560cf7bf6ed0eb41e4345bf07a9bc51dfcaa11038daaa6d98c117c8d88699b17"} Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.804795 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerStarted","Data":"bdc5a11f954523c3ba3f161f292f8e855641eafe9522fddc57d768ddb7d467ef"} Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.939123 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:51:01 crc kubenswrapper[4729]: I0127 06:51:01.978900 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.000990 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.057816 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.813584 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef199f3e-dc03-4397-91ce-9da605f06991" containerID="bdc5a11f954523c3ba3f161f292f8e855641eafe9522fddc57d768ddb7d467ef" exitCode=0 Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.813754 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerDied","Data":"bdc5a11f954523c3ba3f161f292f8e855641eafe9522fddc57d768ddb7d467ef"} Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.819583 4729 generic.go:334] "Generic (PLEG): container finished" podID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerID="d7e72fdb0b21425a81206e5831c5515b6a51d5abfb52f2ad58f0884ecaf5da72" exitCode=0 Jan 27 06:51:02 crc 
kubenswrapper[4729]: I0127 06:51:02.819769 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerDied","Data":"d7e72fdb0b21425a81206e5831c5515b6a51d5abfb52f2ad58f0884ecaf5da72"} Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.827404 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b"} Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.829816 4729 generic.go:334] "Generic (PLEG): container finished" podID="83578f10-10b1-4953-902d-cf066f164ffe" containerID="560cf7bf6ed0eb41e4345bf07a9bc51dfcaa11038daaa6d98c117c8d88699b17" exitCode=0 Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.830283 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerDied","Data":"560cf7bf6ed0eb41e4345bf07a9bc51dfcaa11038daaa6d98c117c8d88699b17"} Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.905915 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.906134 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" podUID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" containerName="route-controller-manager" containerID="cri-o://b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68" gracePeriod=30 Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.924864 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:51:02 crc kubenswrapper[4729]: I0127 06:51:02.925196 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" podUID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" containerName="controller-manager" containerID="cri-o://1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af" gracePeriod=30 Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.424578 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.523148 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.553982 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggz4\" (UniqueName: \"kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4\") pod \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.554026 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config\") pod \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.555009 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config" (OuterVolumeSpecName: "config") pod "0a5f0899-4e89-4d3b-9e23-0943b088c3fc" (UID: "0a5f0899-4e89-4d3b-9e23-0943b088c3fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.555088 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca\") pod \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.555133 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert\") pod \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\" (UID: \"0a5f0899-4e89-4d3b-9e23-0943b088c3fc\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.558733 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a5f0899-4e89-4d3b-9e23-0943b088c3fc" (UID: "0a5f0899-4e89-4d3b-9e23-0943b088c3fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.562892 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4" (OuterVolumeSpecName: "kube-api-access-4ggz4") pod "0a5f0899-4e89-4d3b-9e23-0943b088c3fc" (UID: "0a5f0899-4e89-4d3b-9e23-0943b088c3fc"). InnerVolumeSpecName "kube-api-access-4ggz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.569518 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a5f0899-4e89-4d3b-9e23-0943b088c3fc" (UID: "0a5f0899-4e89-4d3b-9e23-0943b088c3fc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.640055 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.656792 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.656829 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggz4\" (UniqueName: \"kubernetes.io/projected/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-kube-api-access-4ggz4\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.656841 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.656853 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5f0899-4e89-4d3b-9e23-0943b088c3fc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.758133 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config\") pod \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.758192 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca\") pod \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.758238 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkrc\" (UniqueName: \"kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc\") pod \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.758260 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert\") pod \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.758284 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles\") pod \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\" (UID: \"eaabfcb4-beec-42d5-85ed-37cec1692b7b\") " Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.759268 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eaabfcb4-beec-42d5-85ed-37cec1692b7b" (UID: "eaabfcb4-beec-42d5-85ed-37cec1692b7b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.759710 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config" (OuterVolumeSpecName: "config") pod "eaabfcb4-beec-42d5-85ed-37cec1692b7b" (UID: "eaabfcb4-beec-42d5-85ed-37cec1692b7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.759952 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca" (OuterVolumeSpecName: "client-ca") pod "eaabfcb4-beec-42d5-85ed-37cec1692b7b" (UID: "eaabfcb4-beec-42d5-85ed-37cec1692b7b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.762880 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc" (OuterVolumeSpecName: "kube-api-access-tpkrc") pod "eaabfcb4-beec-42d5-85ed-37cec1692b7b" (UID: "eaabfcb4-beec-42d5-85ed-37cec1692b7b"). InnerVolumeSpecName "kube-api-access-tpkrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.762952 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eaabfcb4-beec-42d5-85ed-37cec1692b7b" (UID: "eaabfcb4-beec-42d5-85ed-37cec1692b7b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.837256 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerStarted","Data":"83ebd98a576fce99e168d8848daea9e68356da34267a1351fbfaab84e016dc42"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.841179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerStarted","Data":"b2829bfb59ef213051fadb00f5cc07119a4261c781bc19256eb14344d97b9488"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.845017 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerStarted","Data":"de217d23f5fa3e267a675013fb8e3ab9beba13c8ee5d3ac6171d06ba08aae673"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.847176 4729 generic.go:334] "Generic (PLEG): container finished" podID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" containerID="b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68" exitCode=0 Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.847235 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" event={"ID":"0a5f0899-4e89-4d3b-9e23-0943b088c3fc","Type":"ContainerDied","Data":"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.847258 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" event={"ID":"0a5f0899-4e89-4d3b-9e23-0943b088c3fc","Type":"ContainerDied","Data":"ee769f769fcc1d3321d845912c190bb2c7a0765dfcd17a555638ced4d471fe76"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.847278 4729 scope.go:117] "RemoveContainer" containerID="b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.847379 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.852233 4729 generic.go:334] "Generic (PLEG): container finished" podID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" containerID="1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af" exitCode=0 Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.852915 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.855663 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" event={"ID":"eaabfcb4-beec-42d5-85ed-37cec1692b7b","Type":"ContainerDied","Data":"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.855711 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b7b84f7fd-mqldf" event={"ID":"eaabfcb4-beec-42d5-85ed-37cec1692b7b","Type":"ContainerDied","Data":"0d65ff0102a9f5ca20b412501e939d8e5a587d1b4a5b980d29cb41904db2e14a"} Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.855888 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8w6kb" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="registry-server" containerID="cri-o://6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542" gracePeriod=2 Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.861915 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.861956 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkrc\" (UniqueName: \"kubernetes.io/projected/eaabfcb4-beec-42d5-85ed-37cec1692b7b-kube-api-access-tpkrc\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.861970 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaabfcb4-beec-42d5-85ed-37cec1692b7b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.861984 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.861995 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaabfcb4-beec-42d5-85ed-37cec1692b7b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.874275 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrxcn" podStartSLOduration=4.65760744 podStartE2EDuration="1m0.874247282s" podCreationTimestamp="2026-01-27 06:50:03 +0000 UTC" firstStartedPulling="2026-01-27 06:50:07.015425151 +0000 UTC m=+172.082546414" lastFinishedPulling="2026-01-27 06:51:03.232064993 +0000 UTC m=+228.299186256" observedRunningTime="2026-01-27 06:51:03.863300667 +0000 UTC m=+228.930421950" watchObservedRunningTime="2026-01-27 06:51:03.874247282 +0000 UTC m=+228.941368545" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.880780 4729 scope.go:117] "RemoveContainer" containerID="b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68" Jan 27 06:51:03 crc kubenswrapper[4729]: E0127 06:51:03.881246 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68\": container with ID starting with b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68 not found: ID does not exist" containerID="b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.881300 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68"} err="failed to get container status \"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68\": rpc error: code = NotFound desc = could not find container \"b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68\": container with ID starting with b1e6cd52c2b9d95de5f6cfcda899d87574a42a769d56c64a1cca02414a381f68 not found: ID does not exist" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.881335 4729 scope.go:117] "RemoveContainer" containerID="1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.895391 4729 scope.go:117] "RemoveContainer" containerID="1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.898723 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krr4s" podStartSLOduration=3.3795162579999998 podStartE2EDuration="1m0.898712255s" podCreationTimestamp="2026-01-27 06:50:03 +0000 UTC" firstStartedPulling="2026-01-27 06:50:05.754061806 +0000 UTC m=+170.821183069" lastFinishedPulling="2026-01-27 06:51:03.273257803 +0000 UTC m=+228.340379066" observedRunningTime="2026-01-27 06:51:03.896054852 +0000 UTC m=+228.963176135" watchObservedRunningTime="2026-01-27 06:51:03.898712255 +0000 UTC m=+228.965833518" Jan 27 06:51:03 crc kubenswrapper[4729]: E0127 06:51:03.904093 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af\": container with ID starting with 1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af not found: ID does not exist" containerID="1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.904165 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af"} err="failed to get container status 
\"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af\": rpc error: code = NotFound desc = could not find container \"1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af\": container with ID starting with 1e846439b05c93a52ea6688dd5ab0e6bdfd36609ce5612e4fa3aed0cbb0046af not found: ID does not exist" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.949037 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmbbk" podStartSLOduration=3.479748352 podStartE2EDuration="59.949019513s" podCreationTimestamp="2026-01-27 06:50:04 +0000 UTC" firstStartedPulling="2026-01-27 06:50:06.937332751 +0000 UTC m=+172.004454014" lastFinishedPulling="2026-01-27 06:51:03.406603912 +0000 UTC m=+228.473725175" observedRunningTime="2026-01-27 06:51:03.936345733 +0000 UTC m=+229.003466996" watchObservedRunningTime="2026-01-27 06:51:03.949019513 +0000 UTC m=+229.016140786" Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.951776 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.961581 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7888bf8f-jq9mk"] Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.989468 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:51:03 crc kubenswrapper[4729]: I0127 06:51:03.995282 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b7b84f7fd-mqldf"] Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.297157 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.297489 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.389591 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" path="/var/lib/kubelet/pods/0a5f0899-4e89-4d3b-9e23-0943b088c3fc/volumes" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.390328 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" path="/var/lib/kubelet/pods/eaabfcb4-beec-42d5-85ed-37cec1692b7b/volumes" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.394226 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.481423 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7bzm\" (UniqueName: \"kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm\") pod \"f34f6802-9269-4d26-abee-6f480d374416\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.481552 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities\") pod \"f34f6802-9269-4d26-abee-6f480d374416\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.481629 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content\") pod \"f34f6802-9269-4d26-abee-6f480d374416\" (UID: \"f34f6802-9269-4d26-abee-6f480d374416\") " Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.483656 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities" (OuterVolumeSpecName: "utilities") pod "f34f6802-9269-4d26-abee-6f480d374416" (UID: "f34f6802-9269-4d26-abee-6f480d374416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.488392 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm" (OuterVolumeSpecName: "kube-api-access-j7bzm") pod "f34f6802-9269-4d26-abee-6f480d374416" (UID: "f34f6802-9269-4d26-abee-6f480d374416"). InnerVolumeSpecName "kube-api-access-j7bzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.582689 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.582719 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7bzm\" (UniqueName: \"kubernetes.io/projected/f34f6802-9269-4d26-abee-6f480d374416-kube-api-access-j7bzm\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.653263 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f34f6802-9269-4d26-abee-6f480d374416" (UID: "f34f6802-9269-4d26-abee-6f480d374416"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.683891 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f34f6802-9269-4d26-abee-6f480d374416-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.858046 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerStarted","Data":"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102"} Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.861861 4729 generic.go:334] "Generic (PLEG): container finished" podID="633788b3-11e3-447b-91cf-52a9563c052a" containerID="ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334" exitCode=0 Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.861896 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerDied","Data":"ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334"} Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.863442 4729 generic.go:334] "Generic (PLEG): container finished" podID="f34f6802-9269-4d26-abee-6f480d374416" containerID="6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542" exitCode=0 Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.863499 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerDied","Data":"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542"} Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.863522 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6kb" event={"ID":"f34f6802-9269-4d26-abee-6f480d374416","Type":"ContainerDied","Data":"bd76cfbdd7d1fb31d8bebdf641a8c97655728dc20fc061a39f105607776b661d"} Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.863497 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6kb" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.863542 4729 scope.go:117] "RemoveContainer" containerID="6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.867193 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerStarted","Data":"bbf06a1c35f1a978c5676d284c6c4249eafe7e5f648b1b301262c4fd74df437b"} Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.881686 4729 scope.go:117] "RemoveContainer" containerID="1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.900963 4729 scope.go:117] "RemoveContainer" containerID="4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916531 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916728 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="extract-content" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916739 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="extract-content" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916754 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="extract-utilities" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916760 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="extract-utilities" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916771 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" containerName="route-controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916776 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" containerName="route-controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916784 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" containerName="controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916790 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" containerName="controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916799 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="registry-server" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916804 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="registry-server" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.916815 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9b17c6-9b38-41de-b12e-a121c4639b81" containerName="pruner" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916820 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9b17c6-9b38-41de-b12e-a121c4639b81" containerName="pruner" Jan 27 06:51:04 crc 
kubenswrapper[4729]: I0127 06:51:04.916908 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaabfcb4-beec-42d5-85ed-37cec1692b7b" containerName="controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916922 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5f0899-4e89-4d3b-9e23-0943b088c3fc" containerName="route-controller-manager" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916930 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34f6802-9269-4d26-abee-6f480d374416" containerName="registry-server" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.916941 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9b17c6-9b38-41de-b12e-a121c4639b81" containerName="pruner" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.917291 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.917368 4729 scope.go:117] "RemoveContainer" containerID="6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.918060 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542\": container with ID starting with 6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542 not found: ID does not exist" containerID="6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.918104 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542"} err="failed to get container status \"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542\": rpc error: code = NotFound desc = could not find container \"6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542\": container with ID starting with 6d92e069da66d0f36a56a69288736cd02bdb016635ef51ead0823a121c039542 not found: ID does not exist" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.918127 4729 scope.go:117] "RemoveContainer" containerID="1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.919992 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.920173 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.920680 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.920773 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.921634 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.921827 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:51:04 crc 
kubenswrapper[4729]: E0127 06:51:04.925345 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47\": container with ID starting with 1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47 not found: ID does not exist" containerID="1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.925413 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47"} err="failed to get container status \"1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47\": rpc error: code = NotFound desc = could not find container \"1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47\": container with ID starting with 1861f047ba305d095de6ab305bd899c87022dc21b6b3adf762495c865713ed47 not found: ID does not exist" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.925448 4729 scope.go:117] "RemoveContainer" containerID="4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d" Jan 27 06:51:04 crc kubenswrapper[4729]: E0127 06:51:04.926620 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d\": container with ID starting with 4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d not found: ID does not exist" containerID="4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.926659 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d"} err="failed to get container status \"4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d\": rpc error: code = NotFound desc = could not find container \"4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d\": container with ID starting with 4e6d443a59dd2ac16d146c50045f91fdf287a6553e805e9921299bd24e4c518d not found: ID does not exist" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.928444 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.933018 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.933666 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.938964 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.940256 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.940466 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.940770 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.940895 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.941875 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.957925 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.960874 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.988826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.988876 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.988901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.988922 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.988987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.989010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.989037 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4b82\" (UniqueName: \"kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.989090 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.989174 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgfz\" (UniqueName: \"kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.990754 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:51:04 crc kubenswrapper[4729]: I0127 06:51:04.998081 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8w6kb"] Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.063241 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.063303 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.090227 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4b82\" (UniqueName: \"kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.090318 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: 
\"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.090363 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgfz\" (UniqueName: \"kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.090407 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.090425 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.091697 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.092307 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.092386 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.092408 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.092471 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.092501 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.093115 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.093607 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.100268 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.107790 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.116508 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.129498 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fb2x5"] Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.132261 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgfz\" (UniqueName: \"kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz\") pod \"route-controller-manager-7f4689fd6c-7qjgv\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.136489 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4b82\" (UniqueName: \"kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82\") pod \"controller-manager-bb6785bdf-n4pvt\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.242860 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.252143 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.367793 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wrxcn" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="registry-server" probeResult="failure" output=< Jan 27 06:51:05 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 06:51:05 crc kubenswrapper[4729]: > Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.545785 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.640386 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.872990 4729 generic.go:334] "Generic (PLEG): container finished" podID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerID="bbf06a1c35f1a978c5676d284c6c4249eafe7e5f648b1b301262c4fd74df437b" exitCode=0 Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.873115 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerDied","Data":"bbf06a1c35f1a978c5676d284c6c4249eafe7e5f648b1b301262c4fd74df437b"} Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.875864 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" event={"ID":"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9","Type":"ContainerStarted","Data":"daca4f74ae78a15eb96d4a2d4f0a9ee6d334051533e78d8901edec13afa44e7b"} Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.879635 4729 generic.go:334] "Generic (PLEG): container finished" podID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerID="fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102" exitCode=0 Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.879723 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerDied","Data":"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102"} Jan 27 06:51:05 crc kubenswrapper[4729]: I0127 06:51:05.880665 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" event={"ID":"b247edce-76c1-48ba-954f-c870ba476d9b","Type":"ContainerStarted","Data":"e70bf4ca84f987816b207943de4814ec6a810ff4dd86ed7d5b0b3ac235ff1006"} Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.123788 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmbbk" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="registry-server" probeResult="failure" output=< Jan 27 06:51:06 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 06:51:06 crc kubenswrapper[4729]: > Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.368798 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f34f6802-9269-4d26-abee-6f480d374416" path="/var/lib/kubelet/pods/f34f6802-9269-4d26-abee-6f480d374416/volumes" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.887742 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" event={"ID":"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9","Type":"ContainerStarted","Data":"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2"} Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.889005 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.890310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" event={"ID":"b247edce-76c1-48ba-954f-c870ba476d9b","Type":"ContainerStarted","Data":"e00abb39a9628e382877810e21c20fdfa07c9a3c58968519f9ce080bf41ae23a"} Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.890652 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.896835 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.898374 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.904451 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" podStartSLOduration=3.904433319 podStartE2EDuration="3.904433319s" podCreationTimestamp="2026-01-27 06:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:06.903736156 +0000 UTC m=+231.970857419" watchObservedRunningTime="2026-01-27 06:51:06.904433319 +0000 UTC m=+231.971554582" Jan 27 06:51:06 crc kubenswrapper[4729]: I0127 06:51:06.944996 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" podStartSLOduration=4.9449802290000004 podStartE2EDuration="4.944980229s" podCreationTimestamp="2026-01-27 06:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:06.942968805 +0000 UTC m=+232.010090078" watchObservedRunningTime="2026-01-27 06:51:06.944980229 +0000 UTC m=+232.012101492" Jan 27 06:51:08 crc kubenswrapper[4729]: I0127 06:51:08.906204 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerStarted","Data":"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457"} Jan 27 06:51:09 crc kubenswrapper[4729]: I0127 06:51:09.927338 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-978h9" podStartSLOduration=6.078728408 podStartE2EDuration="1m8.927323534s" podCreationTimestamp="2026-01-27 06:50:01 +0000 UTC" firstStartedPulling="2026-01-27 
06:50:05.685880064 +0000 UTC m=+170.753001327" lastFinishedPulling="2026-01-27 06:51:08.53447519 +0000 UTC m=+233.601596453" observedRunningTime="2026-01-27 06:51:09.92592394 +0000 UTC m=+234.993045203" watchObservedRunningTime="2026-01-27 06:51:09.927323534 +0000 UTC m=+234.994444797" Jan 27 06:51:11 crc kubenswrapper[4729]: I0127 06:51:11.919883 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerStarted","Data":"4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a"} Jan 27 06:51:11 crc kubenswrapper[4729]: I0127 06:51:11.943973 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gv9zq" podStartSLOduration=5.271656988 podStartE2EDuration="1m7.943959548s" podCreationTimestamp="2026-01-27 06:50:04 +0000 UTC" firstStartedPulling="2026-01-27 06:50:08.012358239 +0000 UTC m=+173.079479502" lastFinishedPulling="2026-01-27 06:51:10.684660789 +0000 UTC m=+235.751782062" observedRunningTime="2026-01-27 06:51:11.93893635 +0000 UTC m=+237.006057613" watchObservedRunningTime="2026-01-27 06:51:11.943959548 +0000 UTC m=+237.011080811" Jan 27 06:51:11 crc kubenswrapper[4729]: I0127 06:51:11.974682 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:51:11 crc kubenswrapper[4729]: I0127 06:51:11.975180 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:51:12 crc kubenswrapper[4729]: I0127 06:51:12.018062 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:51:13 crc kubenswrapper[4729]: I0127 06:51:13.599024 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:51:13 crc kubenswrapper[4729]: I0127 06:51:13.599361 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:51:13 crc kubenswrapper[4729]: I0127 06:51:13.636374 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:51:13 crc kubenswrapper[4729]: I0127 06:51:13.963342 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:51:14 crc kubenswrapper[4729]: I0127 06:51:14.368418 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:14 crc kubenswrapper[4729]: I0127 06:51:14.405362 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:14 crc kubenswrapper[4729]: I0127 06:51:14.747824 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:51:14 crc kubenswrapper[4729]: I0127 06:51:14.747890 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:51:14 crc kubenswrapper[4729]: I0127 06:51:14.934089 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" 
event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerStarted","Data":"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a"} Jan 27 06:51:15 crc kubenswrapper[4729]: I0127 06:51:15.120898 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:15 crc kubenswrapper[4729]: I0127 06:51:15.155713 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:15 crc kubenswrapper[4729]: I0127 06:51:15.785571 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gv9zq" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" probeResult="failure" output=< Jan 27 06:51:15 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 06:51:15 crc kubenswrapper[4729]: > Jan 27 06:51:16 crc kubenswrapper[4729]: I0127 06:51:16.394013 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lbbks" podStartSLOduration=8.173355664 podStartE2EDuration="1m15.39399302s" podCreationTimestamp="2026-01-27 06:50:01 +0000 UTC" firstStartedPulling="2026-01-27 06:50:05.796505627 +0000 UTC m=+170.863626890" lastFinishedPulling="2026-01-27 06:51:13.017142983 +0000 UTC m=+238.084264246" observedRunningTime="2026-01-27 06:51:15.962773759 +0000 UTC m=+241.029895022" watchObservedRunningTime="2026-01-27 06:51:16.39399302 +0000 UTC m=+241.461114283" Jan 27 06:51:16 crc kubenswrapper[4729]: I0127 06:51:16.397473 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:51:16 crc kubenswrapper[4729]: I0127 06:51:16.397868 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrxcn" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="registry-server" containerID="cri-o://83ebd98a576fce99e168d8848daea9e68356da34267a1351fbfaab84e016dc42" gracePeriod=2 Jan 27 06:51:18 crc kubenswrapper[4729]: I0127 06:51:18.956738 4729 generic.go:334] "Generic (PLEG): container finished" podID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerID="83ebd98a576fce99e168d8848daea9e68356da34267a1351fbfaab84e016dc42" exitCode=0 Jan 27 06:51:18 crc kubenswrapper[4729]: I0127 06:51:18.956831 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerDied","Data":"83ebd98a576fce99e168d8848daea9e68356da34267a1351fbfaab84e016dc42"} Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.396681 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.397018 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmbbk" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="registry-server" containerID="cri-o://de217d23f5fa3e267a675013fb8e3ab9beba13c8ee5d3ac6171d06ba08aae673" gracePeriod=2 Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.619805 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.689641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities\") pod \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.689688 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjns\" (UniqueName: \"kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns\") pod \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.689735 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content\") pod \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\" (UID: \"e500073e-46f7-4fa1-aaf9-f99824cefcc3\") " Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.691085 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities" (OuterVolumeSpecName: "utilities") pod "e500073e-46f7-4fa1-aaf9-f99824cefcc3" (UID: "e500073e-46f7-4fa1-aaf9-f99824cefcc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.699601 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns" (OuterVolumeSpecName: "kube-api-access-zjjns") pod "e500073e-46f7-4fa1-aaf9-f99824cefcc3" (UID: "e500073e-46f7-4fa1-aaf9-f99824cefcc3"). InnerVolumeSpecName "kube-api-access-zjjns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.710837 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e500073e-46f7-4fa1-aaf9-f99824cefcc3" (UID: "e500073e-46f7-4fa1-aaf9-f99824cefcc3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.792045 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjns\" (UniqueName: \"kubernetes.io/projected/e500073e-46f7-4fa1-aaf9-f99824cefcc3-kube-api-access-zjjns\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.792474 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.792495 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e500073e-46f7-4fa1-aaf9-f99824cefcc3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.968161 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrxcn" event={"ID":"e500073e-46f7-4fa1-aaf9-f99824cefcc3","Type":"ContainerDied","Data":"ead6b192054b298a632f4a08a445ac7ab8e064596df9d5e010a88375b2f59578"} Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.968235 4729 scope.go:117] "RemoveContainer" containerID="83ebd98a576fce99e168d8848daea9e68356da34267a1351fbfaab84e016dc42" Jan 27 06:51:19 crc kubenswrapper[4729]: I0127 06:51:19.968273 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrxcn" Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:19.997424 4729 scope.go:117] "RemoveContainer" containerID="d7e72fdb0b21425a81206e5831c5515b6a51d5abfb52f2ad58f0884ecaf5da72" Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.037451 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.041125 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrxcn"] Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.048485 4729 scope.go:117] "RemoveContainer" containerID="e59edaae43ba74e54946e7b1ef26895a612251340f7728304d6b2270398b4536" Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.373755 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" path="/var/lib/kubelet/pods/e500073e-46f7-4fa1-aaf9-f99824cefcc3/volumes" Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.977346 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef199f3e-dc03-4397-91ce-9da605f06991" containerID="de217d23f5fa3e267a675013fb8e3ab9beba13c8ee5d3ac6171d06ba08aae673" exitCode=0 Jan 27 06:51:20 crc kubenswrapper[4729]: I0127 06:51:20.977470 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerDied","Data":"de217d23f5fa3e267a675013fb8e3ab9beba13c8ee5d3ac6171d06ba08aae673"} Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.553787 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.626129 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities\") pod \"ef199f3e-dc03-4397-91ce-9da605f06991\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.626534 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wk55\" (UniqueName: \"kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55\") pod \"ef199f3e-dc03-4397-91ce-9da605f06991\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.626599 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content\") pod \"ef199f3e-dc03-4397-91ce-9da605f06991\" (UID: \"ef199f3e-dc03-4397-91ce-9da605f06991\") " Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.628719 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities" (OuterVolumeSpecName: "utilities") pod "ef199f3e-dc03-4397-91ce-9da605f06991" (UID: "ef199f3e-dc03-4397-91ce-9da605f06991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.631866 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55" (OuterVolumeSpecName: "kube-api-access-8wk55") pod "ef199f3e-dc03-4397-91ce-9da605f06991" (UID: "ef199f3e-dc03-4397-91ce-9da605f06991"). InnerVolumeSpecName "kube-api-access-8wk55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.728252 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.728622 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wk55\" (UniqueName: \"kubernetes.io/projected/ef199f3e-dc03-4397-91ce-9da605f06991-kube-api-access-8wk55\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.925482 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef199f3e-dc03-4397-91ce-9da605f06991" (UID: "ef199f3e-dc03-4397-91ce-9da605f06991"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.932135 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef199f3e-dc03-4397-91ce-9da605f06991-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.990409 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmbbk" event={"ID":"ef199f3e-dc03-4397-91ce-9da605f06991","Type":"ContainerDied","Data":"4acb185e41834d8812e96e916178d07fe1048964a71dc09f90ca4eccd35002fc"} Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.990468 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmbbk" Jan 27 06:51:21 crc kubenswrapper[4729]: I0127 06:51:21.990469 4729 scope.go:117] "RemoveContainer" containerID="de217d23f5fa3e267a675013fb8e3ab9beba13c8ee5d3ac6171d06ba08aae673" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.030109 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.033822 4729 scope.go:117] "RemoveContainer" containerID="bdc5a11f954523c3ba3f161f292f8e855641eafe9522fddc57d768ddb7d467ef" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.040396 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmbbk"] Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.054562 4729 scope.go:117] "RemoveContainer" containerID="0a4607043bbd666cf2865761002088e1d0c377bb619b4f821a053e87d949628c" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.054700 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.093881 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.093940 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.158920 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.375357 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" path="/var/lib/kubelet/pods/ef199f3e-dc03-4397-91ce-9da605f06991/volumes" Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.909372 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:22 crc kubenswrapper[4729]: I0127 06:51:22.909579 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" podUID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" containerName="controller-manager" containerID="cri-o://d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2" gracePeriod=30 Jan 27 06:51:23 crc kubenswrapper[4729]: I0127 06:51:23.009702 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:23 crc kubenswrapper[4729]: 
I0127 06:51:23.009895 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" podUID="b247edce-76c1-48ba-954f-c870ba476d9b" containerName="route-controller-manager" containerID="cri-o://e00abb39a9628e382877810e21c20fdfa07c9a3c58968519f9ce080bf41ae23a" gracePeriod=30 Jan 27 06:51:23 crc kubenswrapper[4729]: I0127 06:51:23.049184 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:23 crc kubenswrapper[4729]: I0127 06:51:23.968946 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.001756 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64b598cf6d-5dcjc"] Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.001970 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="extract-content" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.001988 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="extract-content" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.001999 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="extract-content" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002007 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="extract-content" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.002022 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002030 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.002040 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002048 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.002071 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="extract-utilities" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002153 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="extract-utilities" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.002163 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="extract-utilities" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002170 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="extract-utilities" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.002178 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" 
containerName="controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002185 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" containerName="controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002305 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e500073e-46f7-4fa1-aaf9-f99824cefcc3" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002322 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef199f3e-dc03-4397-91ce-9da605f06991" containerName="registry-server" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002332 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" containerName="controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.002717 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.004222 4729 generic.go:334] "Generic (PLEG): container finished" podID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" containerID="d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2" exitCode=0 Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.004311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" event={"ID":"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9","Type":"ContainerDied","Data":"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2"} Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.004339 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" event={"ID":"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9","Type":"ContainerDied","Data":"daca4f74ae78a15eb96d4a2d4f0a9ee6d334051533e78d8901edec13afa44e7b"} Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.004361 4729 scope.go:117] "RemoveContainer" containerID="d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.004444 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bb6785bdf-n4pvt" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.007052 4729 generic.go:334] "Generic (PLEG): container finished" podID="b247edce-76c1-48ba-954f-c870ba476d9b" containerID="e00abb39a9628e382877810e21c20fdfa07c9a3c58968519f9ce080bf41ae23a" exitCode=0 Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.007137 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" event={"ID":"b247edce-76c1-48ba-954f-c870ba476d9b","Type":"ContainerDied","Data":"e00abb39a9628e382877810e21c20fdfa07c9a3c58968519f9ce080bf41ae23a"} Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.017319 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64b598cf6d-5dcjc"] Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.026293 4729 scope.go:117] "RemoveContainer" containerID="d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2" Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.027799 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2\": container with ID starting with d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2 not found: ID does not exist" containerID="d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.027843 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2"} err="failed to get container status \"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2\": rpc error: code = NotFound desc = could not find container \"d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2\": container with ID starting with d62aa516056dea94c08f6c75a0b0cc5a94a7a8bd489aa7b3ab435b0909150dc2 not found: ID does not exist" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.042658 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.062801 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4b82\" (UniqueName: \"kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82\") pod \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.062856 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca\") pod \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.062889 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles\") pod \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.062942 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert\") pod \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063011 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config\") pod \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\" (UID: \"c4cb5e75-6f68-4572-9b64-6f7faeb18fe9\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063241 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-client-ca\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-config\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063279 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-proxy-ca-bundles\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063321 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23de747-248c-45de-b3de-38669f6a5263-serving-cert\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " 
pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.063349 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/b23de747-248c-45de-b3de-38669f6a5263-kube-api-access-x7bhq\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.066909 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" (UID: "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.067394 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config" (OuterVolumeSpecName: "config") pod "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" (UID: "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.069392 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82" (OuterVolumeSpecName: "kube-api-access-z4b82") pod "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" (UID: "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9"). InnerVolumeSpecName "kube-api-access-z4b82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.070461 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" (UID: "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.085718 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" (UID: "c4cb5e75-6f68-4572-9b64-6f7faeb18fe9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164402 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config\") pod \"b247edce-76c1-48ba-954f-c870ba476d9b\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164466 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca\") pod \"b247edce-76c1-48ba-954f-c870ba476d9b\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164511 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert\") pod \"b247edce-76c1-48ba-954f-c870ba476d9b\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164573 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kgfz\" (UniqueName: \"kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz\") pod \"b247edce-76c1-48ba-954f-c870ba476d9b\" (UID: \"b247edce-76c1-48ba-954f-c870ba476d9b\") " Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164801 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-client-ca\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164827 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-config\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-proxy-ca-bundles\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164878 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23de747-248c-45de-b3de-38669f6a5263-serving-cert\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164905 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/b23de747-248c-45de-b3de-38669f6a5263-kube-api-access-x7bhq\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 
06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164945 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164957 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4b82\" (UniqueName: \"kubernetes.io/projected/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-kube-api-access-z4b82\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164968 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164979 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.164987 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.165182 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b247edce-76c1-48ba-954f-c870ba476d9b" (UID: "b247edce-76c1-48ba-954f-c870ba476d9b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.165215 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config" (OuterVolumeSpecName: "config") pod "b247edce-76c1-48ba-954f-c870ba476d9b" (UID: "b247edce-76c1-48ba-954f-c870ba476d9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.166090 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-client-ca\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.166458 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-config\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.166722 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b23de747-248c-45de-b3de-38669f6a5263-proxy-ca-bundles\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.170174 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23de747-248c-45de-b3de-38669f6a5263-serving-cert\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.170387 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz" (OuterVolumeSpecName: "kube-api-access-7kgfz") pod "b247edce-76c1-48ba-954f-c870ba476d9b" (UID: "b247edce-76c1-48ba-954f-c870ba476d9b"). InnerVolumeSpecName "kube-api-access-7kgfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.170438 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b247edce-76c1-48ba-954f-c870ba476d9b" (UID: "b247edce-76c1-48ba-954f-c870ba476d9b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.193036 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/b23de747-248c-45de-b3de-38669f6a5263-kube-api-access-x7bhq\") pod \"controller-manager-64b598cf6d-5dcjc\" (UID: \"b23de747-248c-45de-b3de-38669f6a5263\") " pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.266307 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.266356 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b247edce-76c1-48ba-954f-c870ba476d9b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.266371 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b247edce-76c1-48ba-954f-c870ba476d9b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.266384 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kgfz\" (UniqueName: \"kubernetes.io/projected/b247edce-76c1-48ba-954f-c870ba476d9b-kube-api-access-7kgfz\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.336895 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.341119 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bb6785bdf-n4pvt"] Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.369335 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cb5e75-6f68-4572-9b64-6f7faeb18fe9" path="/var/lib/kubelet/pods/c4cb5e75-6f68-4572-9b64-6f7faeb18fe9/volumes" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.403908 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.785645 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.796942 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.830584 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.893270 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64b598cf6d-5dcjc"] Jan 27 06:51:24 crc kubenswrapper[4729]: W0127 06:51:24.896635 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23de747_248c_45de_b3de_38669f6a5263.slice/crio-f9b339e72ffda5a14db9f170582b20995728cb03a4f9edb054cda388d54feeb0 WatchSource:0}: Error finding container f9b339e72ffda5a14db9f170582b20995728cb03a4f9edb054cda388d54feeb0: Status 404 returned error can't find the container with id f9b339e72ffda5a14db9f170582b20995728cb03a4f9edb054cda388d54feeb0 Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.933635 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5"] Jan 27 06:51:24 crc kubenswrapper[4729]: E0127 06:51:24.934098 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b247edce-76c1-48ba-954f-c870ba476d9b" containerName="route-controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.934194 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b247edce-76c1-48ba-954f-c870ba476d9b" containerName="route-controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.934407 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b247edce-76c1-48ba-954f-c870ba476d9b" containerName="route-controller-manager" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.934913 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:24 crc kubenswrapper[4729]: I0127 06:51:24.946677 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5"] Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.014021 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" event={"ID":"b247edce-76c1-48ba-954f-c870ba476d9b","Type":"ContainerDied","Data":"e70bf4ca84f987816b207943de4814ec6a810ff4dd86ed7d5b0b3ac235ff1006"} Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.014062 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.014102 4729 scope.go:117] "RemoveContainer" containerID="e00abb39a9628e382877810e21c20fdfa07c9a3c58968519f9ce080bf41ae23a" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.015153 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" event={"ID":"b23de747-248c-45de-b3de-38669f6a5263","Type":"ContainerStarted","Data":"f9b339e72ffda5a14db9f170582b20995728cb03a4f9edb054cda388d54feeb0"} Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.015204 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lbbks" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="registry-server" containerID="cri-o://20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a" gracePeriod=2 Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.087715 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.090403 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-client-ca\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.090525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzjrw\" (UniqueName: \"kubernetes.io/projected/386c07f5-5176-483a-92f3-9dfdaabbfbcd-kube-api-access-qzjrw\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.090595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-config\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.090634 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386c07f5-5176-483a-92f3-9dfdaabbfbcd-serving-cert\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.092445 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4689fd6c-7qjgv"] Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.191724 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386c07f5-5176-483a-92f3-9dfdaabbfbcd-serving-cert\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: 
\"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.191790 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-client-ca\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.191859 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzjrw\" (UniqueName: \"kubernetes.io/projected/386c07f5-5176-483a-92f3-9dfdaabbfbcd-kube-api-access-qzjrw\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.191892 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-config\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.192860 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-config\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.194533 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386c07f5-5176-483a-92f3-9dfdaabbfbcd-client-ca\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.198678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386c07f5-5176-483a-92f3-9dfdaabbfbcd-serving-cert\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.212282 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzjrw\" (UniqueName: \"kubernetes.io/projected/386c07f5-5176-483a-92f3-9dfdaabbfbcd-kube-api-access-qzjrw\") pod \"route-controller-manager-664ff66f7f-l6xs5\" (UID: \"386c07f5-5176-483a-92f3-9dfdaabbfbcd\") " pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.377335 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.830667 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5"] Jan 27 06:51:25 crc kubenswrapper[4729]: W0127 06:51:25.843593 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod386c07f5_5176_483a_92f3_9dfdaabbfbcd.slice/crio-3c9a7757e9740d3ffd3a46490d8eb57a4afe118af1e09ec7f39ebd1f337d7357 WatchSource:0}: Error finding container 3c9a7757e9740d3ffd3a46490d8eb57a4afe118af1e09ec7f39ebd1f337d7357: Status 404 returned error can't find the container with id 3c9a7757e9740d3ffd3a46490d8eb57a4afe118af1e09ec7f39ebd1f337d7357 Jan 27 06:51:25 crc kubenswrapper[4729]: I0127 06:51:25.943239 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.022455 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" event={"ID":"b23de747-248c-45de-b3de-38669f6a5263","Type":"ContainerStarted","Data":"25b4e3959182ea24cfb6a517a37338bf11ecc3cf078ffc006ba9f508af903a4e"} Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.022766 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.026516 4729 generic.go:334] "Generic (PLEG): container finished" podID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerID="20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a" exitCode=0 Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.026585 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbbks" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.026630 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerDied","Data":"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a"} Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.027185 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbbks" event={"ID":"7ff397c2-8ed9-4073-ae6c-8600c382f227","Type":"ContainerDied","Data":"6e3b34869de6fd4e214ad90c36664b2952189b920ced85843108dbc4bd97132b"} Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.027218 4729 scope.go:117] "RemoveContainer" containerID="20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.029689 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" event={"ID":"386c07f5-5176-483a-92f3-9dfdaabbfbcd","Type":"ContainerStarted","Data":"3c9a7757e9740d3ffd3a46490d8eb57a4afe118af1e09ec7f39ebd1f337d7357"} Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.033984 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.042100 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64b598cf6d-5dcjc" podStartSLOduration=4.042084352 podStartE2EDuration="4.042084352s" podCreationTimestamp="2026-01-27 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:26.040450381 +0000 UTC m=+251.107571654" watchObservedRunningTime="2026-01-27 06:51:26.042084352 +0000 UTC m=+251.109205625" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.052694 4729 scope.go:117] "RemoveContainer" containerID="fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.126377 4729 scope.go:117] "RemoveContainer" containerID="76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.133996 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities\") pod \"7ff397c2-8ed9-4073-ae6c-8600c382f227\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.134162 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content\") pod \"7ff397c2-8ed9-4073-ae6c-8600c382f227\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.134210 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jh62\" (UniqueName: \"kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62\") pod \"7ff397c2-8ed9-4073-ae6c-8600c382f227\" (UID: \"7ff397c2-8ed9-4073-ae6c-8600c382f227\") " Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.138972 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities" (OuterVolumeSpecName: "utilities") pod "7ff397c2-8ed9-4073-ae6c-8600c382f227" (UID: "7ff397c2-8ed9-4073-ae6c-8600c382f227"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.145436 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62" (OuterVolumeSpecName: "kube-api-access-9jh62") pod "7ff397c2-8ed9-4073-ae6c-8600c382f227" (UID: "7ff397c2-8ed9-4073-ae6c-8600c382f227"). InnerVolumeSpecName "kube-api-access-9jh62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.173341 4729 scope.go:117] "RemoveContainer" containerID="20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a" Jan 27 06:51:26 crc kubenswrapper[4729]: E0127 06:51:26.173917 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a\": container with ID starting with 20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a not found: ID does not exist" containerID="20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.173945 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a"} err="failed to get container status \"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a\": rpc error: code = NotFound desc = could not find container \"20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a\": container with ID starting with 20f9d44e7c96134cea6a161acdd55e1ec9181e78389a5d211c7ad1921131073a not found: ID does not exist" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.173987 4729 scope.go:117] "RemoveContainer" containerID="fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102" Jan 27 06:51:26 crc kubenswrapper[4729]: E0127 06:51:26.174213 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102\": container with ID starting with fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102 not found: ID does not exist" containerID="fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.174260 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102"} err="failed to get container status \"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102\": rpc error: code = NotFound desc = could not find container \"fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102\": container with ID starting with fadb040c158d9f3ac183659c69d740f91ae3b5bcc4e0f71de4090c377e62f102 not found: ID does not exist" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.174303 4729 scope.go:117] "RemoveContainer" containerID="76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7" Jan 27 06:51:26 crc kubenswrapper[4729]: E0127 06:51:26.174586 4729 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7\": container with ID starting with 76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7 not found: ID does not exist" containerID="76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.174615 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7"} err="failed to get container status \"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7\": rpc error: code = NotFound desc = could not find container \"76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7\": container with ID starting with 76d5bd9804ddc652eaac51aa5fc6e1bd377186601d345207bba74e39bf7d48d7 not found: ID does not exist" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.205697 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff397c2-8ed9-4073-ae6c-8600c382f227" (UID: "7ff397c2-8ed9-4073-ae6c-8600c382f227"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.235973 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.236011 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jh62\" (UniqueName: \"kubernetes.io/projected/7ff397c2-8ed9-4073-ae6c-8600c382f227-kube-api-access-9jh62\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.236024 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff397c2-8ed9-4073-ae6c-8600c382f227-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.378327 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b247edce-76c1-48ba-954f-c870ba476d9b" path="/var/lib/kubelet/pods/b247edce-76c1-48ba-954f-c870ba476d9b/volumes" Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.379280 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:51:26 crc kubenswrapper[4729]: I0127 06:51:26.379325 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lbbks"] Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.039256 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" event={"ID":"386c07f5-5176-483a-92f3-9dfdaabbfbcd","Type":"ContainerStarted","Data":"d88454be254b3728c78f38d98ecb45b42898210978bf432134833977ef1d8bb4"} Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.057833 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" podStartSLOduration=4.057815653 podStartE2EDuration="4.057815653s" podCreationTimestamp="2026-01-27 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:27.056393249 +0000 UTC m=+252.123514552" watchObservedRunningTime="2026-01-27 06:51:27.057815653 +0000 UTC m=+252.124936926" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.479329 4729 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.480865 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082" gracePeriod=15 Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.480898 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3" gracePeriod=15 Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.480966 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2" gracePeriod=15 Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.481007 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56" gracePeriod=15 Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.481158 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4" gracePeriod=15 Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486051 4729 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486556 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486587 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486614 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486630 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486652 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="extract-utilities" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486669 4729 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="extract-utilities" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486695 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486711 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486732 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="extract-content" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486747 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="extract-content" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486769 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486787 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486810 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486825 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486852 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486867 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486883 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="registry-server" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486898 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="registry-server" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486919 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486933 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.486960 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.486978 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488608 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488641 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488671 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" containerName="registry-server" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488690 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488710 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488727 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488766 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.488786 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.498990 4729 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.500424 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.508183 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.574150 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656041 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656125 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656189 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656211 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656317 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656341 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.656411 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757402 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757490 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757566 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757596 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757686 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757714 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757743 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757877 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757931 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.757971 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.758011 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.758051 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.758116 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.758155 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.758189 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: I0127 06:51:27.852172 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:27 crc kubenswrapper[4729]: W0127 06:51:27.871034 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-608613cd7a433600ca3c80830cc2de7ac2ed1381478f1b77ad648bbd876f9899 WatchSource:0}: Error finding container 608613cd7a433600ca3c80830cc2de7ac2ed1381478f1b77ad648bbd876f9899: Status 404 returned error can't find the container with id 608613cd7a433600ca3c80830cc2de7ac2ed1381478f1b77ad648bbd876f9899 Jan 27 06:51:27 crc kubenswrapper[4729]: E0127 06:51:27.875759 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e83dcc8069bca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,LastTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.049120 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.051299 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.053153 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3" exitCode=0 Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.053179 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56" exitCode=0 Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.053188 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2" exitCode=0 Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.053199 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4" exitCode=2 Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.053262 4729 scope.go:117] "RemoveContainer" containerID="65384284b1303384500fa9e294a567fe2ba9b470577e1be7442ff53d33ce9066" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 
06:51:28.055038 4729 generic.go:334] "Generic (PLEG): container finished" podID="13f1651f-1170-4455-bbb3-bcf62eb786b7" containerID="3872c471aa8d17b5f3bf76dd14ba61766892a7da9beffe0411a132eab4072393" exitCode=0 Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.055123 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"13f1651f-1170-4455-bbb3-bcf62eb786b7","Type":"ContainerDied","Data":"3872c471aa8d17b5f3bf76dd14ba61766892a7da9beffe0411a132eab4072393"} Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.055893 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.056063 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"608613cd7a433600ca3c80830cc2de7ac2ed1381478f1b77ad648bbd876f9899"} Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.056213 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.056308 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.060917 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.061335 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.061565 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.061954 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:28 crc kubenswrapper[4729]: I0127 06:51:28.373534 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff397c2-8ed9-4073-ae6c-8600c382f227" 
path="/var/lib/kubelet/pods/7ff397c2-8ed9-4073-ae6c-8600c382f227/volumes" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.061975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1"} Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.063420 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.063740 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.064230 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.065829 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.435586 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.436096 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.436409 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.436854 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: E0127 06:51:29.545826 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e83dcc8069bca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,LastTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.615225 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock\") pod \"13f1651f-1170-4455-bbb3-bcf62eb786b7\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.615807 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access\") pod \"13f1651f-1170-4455-bbb3-bcf62eb786b7\" (UID: \"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.615904 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir\") pod \"13f1651f-1170-4455-bbb3-bcf62eb786b7\" (UID: 
\"13f1651f-1170-4455-bbb3-bcf62eb786b7\") " Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.615682 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock" (OuterVolumeSpecName: "var-lock") pod "13f1651f-1170-4455-bbb3-bcf62eb786b7" (UID: "13f1651f-1170-4455-bbb3-bcf62eb786b7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.616524 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "13f1651f-1170-4455-bbb3-bcf62eb786b7" (UID: "13f1651f-1170-4455-bbb3-bcf62eb786b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.649198 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "13f1651f-1170-4455-bbb3-bcf62eb786b7" (UID: "13f1651f-1170-4455-bbb3-bcf62eb786b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.717340 4729 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.717379 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13f1651f-1170-4455-bbb3-bcf62eb786b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.717393 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f1651f-1170-4455-bbb3-bcf62eb786b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.880483 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.881393 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.881813 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.882031 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.882298 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:29 crc kubenswrapper[4729]: I0127 06:51:29.882759 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021475 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021571 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021601 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021748 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021792 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.021806 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.022038 4729 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.022061 4729 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.022101 4729 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.072366 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.072365 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"13f1651f-1170-4455-bbb3-bcf62eb786b7","Type":"ContainerDied","Data":"3d725586f46b49789888e1fbcb40fb8827a33a66fd1f1dfbd886cc39643d26eb"} Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.072524 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d725586f46b49789888e1fbcb40fb8827a33a66fd1f1dfbd886cc39643d26eb" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.075142 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.076301 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082" exitCode=0 Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.076528 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.077138 4729 scope.go:117] "RemoveContainer" containerID="29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.094910 4729 scope.go:117] "RemoveContainer" containerID="2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.095707 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.096273 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.096561 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.096820 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.097993 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.098318 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.098625 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.098931 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.106986 4729 scope.go:117] "RemoveContainer" containerID="e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.120173 4729 scope.go:117] "RemoveContainer" containerID="01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.135389 4729 scope.go:117] "RemoveContainer" containerID="7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.146919 4729 scope.go:117] "RemoveContainer" containerID="923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.150466 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerName="oauth-openshift" containerID="cri-o://3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537" gracePeriod=15 Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.172654 4729 scope.go:117] "RemoveContainer" containerID="29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.173844 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\": container with ID starting with 29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3 not found: ID does not exist" containerID="29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.173999 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3"} err="failed to get container status \"29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\": rpc error: code = NotFound desc = could not find container \"29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3\": container with ID starting with 29b4c828be00ac2a18e2f47b25cec5b5e6ab85933578b7507793e21f3d57c1e3 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.174151 4729 scope.go:117] "RemoveContainer" containerID="2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.174461 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\": container with ID starting with 2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56 not found: ID does not exist" containerID="2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.174492 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56"} err="failed to get container status \"2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\": rpc error: code = NotFound desc = could not find container 
\"2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56\": container with ID starting with 2c44a9193f64fd595f649d63223e79d2680eac4e8b42f6036913f5b1213dbe56 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.174510 4729 scope.go:117] "RemoveContainer" containerID="e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.174718 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\": container with ID starting with e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2 not found: ID does not exist" containerID="e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.174745 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2"} err="failed to get container status \"e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\": rpc error: code = NotFound desc = could not find container \"e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2\": container with ID starting with e13239a1e88c0fb85dab1f7bef1f3b08daf7d4edbff33818f070ee28774d80c2 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.174762 4729 scope.go:117] "RemoveContainer" containerID="01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.175119 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\": container with ID starting with 01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4 not found: ID does not exist" containerID="01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.175145 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4"} err="failed to get container status \"01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\": rpc error: code = NotFound desc = could not find container \"01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4\": container with ID starting with 01990dc3389f20d01c765db8484451ad48ddef8633a24d62c64a323febec9aa4 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.175162 4729 scope.go:117] "RemoveContainer" containerID="7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.175410 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\": container with ID starting with 7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082 not found: ID does not exist" containerID="7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.175536 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082"} 
err="failed to get container status \"7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\": rpc error: code = NotFound desc = could not find container \"7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082\": container with ID starting with 7c6896e23271893799556e30bf721661378696b536e44279d4f13a2a846ee082 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.175641 4729 scope.go:117] "RemoveContainer" containerID="923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05" Jan 27 06:51:30 crc kubenswrapper[4729]: E0127 06:51:30.176252 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\": container with ID starting with 923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05 not found: ID does not exist" containerID="923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.176304 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05"} err="failed to get container status \"923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\": rpc error: code = NotFound desc = could not find container \"923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05\": container with ID starting with 923388e752aaaa687d49961074485a18e73f4970160eb7e4a6df48a199e33c05 not found: ID does not exist" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.373567 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.500540 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.500813 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.500979 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.501178 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.501426 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630605 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630650 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4k7k\" (UniqueName: \"kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630668 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630687 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630704 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630727 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630778 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630802 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630824 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630840 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630860 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630887 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs\") pod \"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.630924 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session\") pod 
\"6c9819c6-6d83-4ef1-94bd-038e573864d9\" (UID: \"6c9819c6-6d83-4ef1-94bd-038e573864d9\") " Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.633622 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.633653 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.633628 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.638164 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.638586 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.638794 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.639138 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.639739 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.640113 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.640745 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.641358 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.641519 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.641764 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.643476 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k" (OuterVolumeSpecName: "kube-api-access-w4k7k") pod "6c9819c6-6d83-4ef1-94bd-038e573864d9" (UID: "6c9819c6-6d83-4ef1-94bd-038e573864d9"). InnerVolumeSpecName "kube-api-access-w4k7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731747 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731779 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731789 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731801 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4k7k\" (UniqueName: \"kubernetes.io/projected/6c9819c6-6d83-4ef1-94bd-038e573864d9-kube-api-access-w4k7k\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731811 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731820 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731831 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731840 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731849 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731858 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731868 4729 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731877 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731886 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c9819c6-6d83-4ef1-94bd-038e573864d9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:30 crc kubenswrapper[4729]: I0127 06:51:30.731894 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c9819c6-6d83-4ef1-94bd-038e573864d9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.085049 4729 generic.go:334] "Generic (PLEG): container finished" podID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerID="3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537" exitCode=0 Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.085277 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" event={"ID":"6c9819c6-6d83-4ef1-94bd-038e573864d9","Type":"ContainerDied","Data":"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537"} Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.085386 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" event={"ID":"6c9819c6-6d83-4ef1-94bd-038e573864d9","Type":"ContainerDied","Data":"addbe644ad1771df0a789973d2c3f612a74b635f0edeace76682f13127ca9e99"} Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.085412 4729 scope.go:117] "RemoveContainer" containerID="3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.086284 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.086944 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.087280 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.087473 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.087614 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.109409 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.109935 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.110429 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.110631 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.111980 4729 scope.go:117] "RemoveContainer" 
containerID="3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537" Jan 27 06:51:31 crc kubenswrapper[4729]: E0127 06:51:31.112342 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537\": container with ID starting with 3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537 not found: ID does not exist" containerID="3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537" Jan 27 06:51:31 crc kubenswrapper[4729]: I0127 06:51:31.112372 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537"} err="failed to get container status \"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537\": rpc error: code = NotFound desc = could not find container \"3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537\": container with ID starting with 3684de4a3eea7272595ff458e726dd21495796188896ce82aa668806984f7537 not found: ID does not exist" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.006691 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.006951 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.007234 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.007466 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.007675 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:35 crc kubenswrapper[4729]: I0127 06:51:35.007698 4729 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.007932 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.209841 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Jan 27 06:51:35 crc kubenswrapper[4729]: E0127 06:51:35.612061 4729 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Jan 27 06:51:36 crc kubenswrapper[4729]: I0127 06:51:36.367030 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:36 crc kubenswrapper[4729]: I0127 06:51:36.368258 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:36 crc kubenswrapper[4729]: I0127 06:51:36.368687 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:36 crc kubenswrapper[4729]: I0127 06:51:36.368920 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:36 crc kubenswrapper[4729]: E0127 06:51:36.413182 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Jan 27 06:51:38 crc kubenswrapper[4729]: E0127 06:51:38.014366 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Jan 27 06:51:38 crc kubenswrapper[4729]: E0127 06:51:38.421849 4729 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" volumeName="registry-storage" Jan 27 06:51:39 crc kubenswrapper[4729]: E0127 06:51:39.547399 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e83dcc8069bca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,LastTimestamp:2026-01-27 06:51:27.87510369 +0000 UTC m=+252.942224963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.151289 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.151738 4729 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2" exitCode=1 Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.151789 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2"} Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.152637 4729 scope.go:117] "RemoveContainer" containerID="ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.153175 4729 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.155883 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.156458 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.157005 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.158650 4729 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: E0127 06:51:41.216405 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="6.4s" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.362203 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.363905 4729 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.364141 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.364308 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.364457 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.364729 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.376741 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.376796 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:41 crc kubenswrapper[4729]: E0127 06:51:41.377389 4729 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:41 crc kubenswrapper[4729]: I0127 06:51:41.377717 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:41 crc kubenswrapper[4729]: W0127 06:51:41.403711 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6ed4cfcd7be2128eca1756ebd261c2bef61589130d6188483741004fcc746aab WatchSource:0}: Error finding container 6ed4cfcd7be2128eca1756ebd261c2bef61589130d6188483741004fcc746aab: Status 404 returned error can't find the container with id 6ed4cfcd7be2128eca1756ebd261c2bef61589130d6188483741004fcc746aab Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.163471 4729 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="254e276e445d4c17cd80132d13637ed1cdfbb8823d1a7915c87c9e295d5f840e" exitCode=0 Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.163584 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"254e276e445d4c17cd80132d13637ed1cdfbb8823d1a7915c87c9e295d5f840e"} Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.165223 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ed4cfcd7be2128eca1756ebd261c2bef61589130d6188483741004fcc746aab"} Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.165762 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.165810 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:42 crc kubenswrapper[4729]: E0127 06:51:42.166568 4729 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.166564 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.167190 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.167707 4729 status_manager.go:851] "Failed to 
get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.168288 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.168637 4729 status_manager.go:851] "Failed to get status for pod" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.170850 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.170930 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"610f449015358452a540ca9c92975d520b1740697f220bf76c118c7efa06faf3"} Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.171793 4729 status_manager.go:851] "Failed to get status for pod" podUID="386c07f5-5176-483a-92f3-9dfdaabbfbcd" pod="openshift-route-controller-manager/route-controller-manager-664ff66f7f-l6xs5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-664ff66f7f-l6xs5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.172222 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.172644 4729 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.173159 4729 status_manager.go:851] "Failed to get status for pod" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" pod="openshift-authentication/oauth-openshift-558db77b4-fb2x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fb2x5\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:42 crc kubenswrapper[4729]: I0127 06:51:42.173439 4729 status_manager.go:851] "Failed to get status for pod" 
podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 27 06:51:43 crc kubenswrapper[4729]: I0127 06:51:43.197275 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eca5d77142e31581f512fb029878a7f3146dd622f9686c0564fd1e77bdb2c22c"} Jan 27 06:51:43 crc kubenswrapper[4729]: I0127 06:51:43.197621 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"561bb02682958c2da6f2318c49d070fe83eebec5341362dc812f9e10d79f8c9e"} Jan 27 06:51:43 crc kubenswrapper[4729]: I0127 06:51:43.197637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6c7a81d44d3520d33c1d39f46002f084a16a4722b376e02ea2644203b61356a"} Jan 27 06:51:44 crc kubenswrapper[4729]: I0127 06:51:44.205103 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5b4b7553314650d7c4f064d6c2b602b5ffe82134e3376be1829d62dca494c23"} Jan 27 06:51:44 crc kubenswrapper[4729]: I0127 06:51:44.205150 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70ed417ab368524189a8d08ebf1437cad458527e3e321c6766c13d10cee6d347"} Jan 27 06:51:44 crc kubenswrapper[4729]: I0127 06:51:44.205954 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:44 crc kubenswrapper[4729]: I0127 06:51:44.206130 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:44 crc kubenswrapper[4729]: I0127 06:51:44.206223 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:46 crc kubenswrapper[4729]: I0127 06:51:46.381486 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:46 crc kubenswrapper[4729]: I0127 06:51:46.381818 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:46 crc kubenswrapper[4729]: I0127 06:51:46.388662 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:47 crc kubenswrapper[4729]: I0127 06:51:47.586530 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:51:47 crc kubenswrapper[4729]: I0127 06:51:47.586735 4729 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection 
refused" start-of-body= Jan 27 06:51:47 crc kubenswrapper[4729]: I0127 06:51:47.586803 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 06:51:47 crc kubenswrapper[4729]: I0127 06:51:47.824280 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.226809 4729 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.263297 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.263350 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.263384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.263419 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.265680 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.266223 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.266259 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.275255 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.276479 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.281872 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.289385 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.293584 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.404403 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4fc75491-549c-4390-9875-fbddb447de7b" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.485602 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.502321 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:51:49 crc kubenswrapper[4729]: I0127 06:51:49.512441 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:51:50 crc kubenswrapper[4729]: W0127 06:51:50.032944 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-5228e4b20eb9a7ae61fd8f33f19019a8cf50dfec6325f3f4bd9b7f861d965edb WatchSource:0}: Error finding container 5228e4b20eb9a7ae61fd8f33f19019a8cf50dfec6325f3f4bd9b7f861d965edb: Status 404 returned error can't find the container with id 5228e4b20eb9a7ae61fd8f33f19019a8cf50dfec6325f3f4bd9b7f861d965edb Jan 27 06:51:50 crc kubenswrapper[4729]: W0127 06:51:50.035183 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b9982bc07ff7ce5fb8361ca07996d269a8e578e94ae8952d3f0cbaae36e9c716 WatchSource:0}: Error finding container b9982bc07ff7ce5fb8361ca07996d269a8e578e94ae8952d3f0cbaae36e9c716: Status 404 returned error can't find the container with id b9982bc07ff7ce5fb8361ca07996d269a8e578e94ae8952d3f0cbaae36e9c716 Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.241525 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"867cc94e70bab47bea19eb46954f25a1062e54d4d617de576e18092cd86235d3"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.241587 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b9982bc07ff7ce5fb8361ca07996d269a8e578e94ae8952d3f0cbaae36e9c716"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.241726 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.242953 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"142e09a3c0e679513ad06539455deed71784544027dfb125e197e81b045537c7"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.242990 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"db20a422c5213993a1b6d49b5701bc413e5af1c84ce01fb70eb956916a271cad"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.245305 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"46800f664925a200d0f9b5931bf6a1203f4220bd11ec1b1f8db18686d8b2af85"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.245339 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5228e4b20eb9a7ae61fd8f33f19019a8cf50dfec6325f3f4bd9b7f861d965edb"} Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.245515 4729 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.245532 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.252040 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:51:50 crc kubenswrapper[4729]: I0127 06:51:50.264302 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4fc75491-549c-4390-9875-fbddb447de7b" Jan 27 06:51:51 crc kubenswrapper[4729]: I0127 06:51:51.251542 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:51 crc kubenswrapper[4729]: I0127 06:51:51.251844 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f3ce9174-ae55-4e97-8dd6-96c11ac10b59" Jan 27 06:51:51 crc kubenswrapper[4729]: I0127 06:51:51.255055 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4fc75491-549c-4390-9875-fbddb447de7b" Jan 27 06:51:52 crc kubenswrapper[4729]: I0127 06:51:52.261736 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 27 06:51:52 crc kubenswrapper[4729]: I0127 06:51:52.262677 4729 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="46800f664925a200d0f9b5931bf6a1203f4220bd11ec1b1f8db18686d8b2af85" exitCode=255 Jan 27 06:51:52 crc kubenswrapper[4729]: I0127 06:51:52.262773 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"46800f664925a200d0f9b5931bf6a1203f4220bd11ec1b1f8db18686d8b2af85"} Jan 27 06:51:52 crc kubenswrapper[4729]: I0127 06:51:52.264590 4729 scope.go:117] "RemoveContainer" containerID="46800f664925a200d0f9b5931bf6a1203f4220bd11ec1b1f8db18686d8b2af85" Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.269668 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.270210 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.270270 4729 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="36ea56e6a9d824076c687fc9a748a5765025514eedf67e448b154923154cfe18" exitCode=255 Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.270309 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"36ea56e6a9d824076c687fc9a748a5765025514eedf67e448b154923154cfe18"} Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.270354 4729 scope.go:117] "RemoveContainer" containerID="46800f664925a200d0f9b5931bf6a1203f4220bd11ec1b1f8db18686d8b2af85" Jan 27 06:51:53 crc kubenswrapper[4729]: I0127 06:51:53.270968 4729 scope.go:117] "RemoveContainer" containerID="36ea56e6a9d824076c687fc9a748a5765025514eedf67e448b154923154cfe18" Jan 27 06:51:53 crc kubenswrapper[4729]: E0127 06:51:53.271304 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:51:54 crc kubenswrapper[4729]: I0127 06:51:54.277839 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 06:51:57 crc kubenswrapper[4729]: I0127 06:51:57.587384 4729 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 06:51:57 crc kubenswrapper[4729]: I0127 06:51:57.587794 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 06:51:59 crc kubenswrapper[4729]: I0127 06:51:59.287007 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:51:59 crc kubenswrapper[4729]: I0127 06:51:59.427825 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:51:59 crc kubenswrapper[4729]: I0127 06:51:59.783136 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.081770 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.118411 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.283197 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.286335 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.313831 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.454169 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.508059 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.709912 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.737318 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.768518 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:52:00 crc kubenswrapper[4729]: I0127 06:52:00.998828 4729 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.043464 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.218332 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.337841 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.367291 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.532413 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.541732 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.574694 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.885370 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.892189 4729 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.893983 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.893953179 podStartE2EDuration="34.893953179s" podCreationTimestamp="2026-01-27 06:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:49.276327594 +0000 UTC m=+274.343448877" watchObservedRunningTime="2026-01-27 06:52:01.893953179 +0000 UTC m=+286.961074502" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.901491 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-fb2x5"] Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.901584 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.908761 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.919553 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.934949 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:52:01 crc kubenswrapper[4729]: I0127 06:52:01.936154 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.936129753 podStartE2EDuration="12.936129753s" podCreationTimestamp="2026-01-27 06:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:52:01.933634462 +0000 UTC m=+287.000755775" watchObservedRunningTime="2026-01-27 06:52:01.936129753 +0000 UTC m=+287.003251076" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.026824 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.043463 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.058519 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.106771 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.150402 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.152144 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.166968 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.172989 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.181514 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.309629 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.371296 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" path="/var/lib/kubelet/pods/6c9819c6-6d83-4ef1-94bd-038e573864d9/volumes" Jan 27 06:52:02 crc 
kubenswrapper[4729]: I0127 06:52:02.519203 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.549515 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.570432 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.599546 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.630025 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:52:02 crc kubenswrapper[4729]: I0127 06:52:02.702578 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.051712 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.116668 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.380118 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.453685 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.545557 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.547610 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.551239 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.559582 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.592095 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.680474 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.686652 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.808098 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.818666 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:52:03 crc 
kubenswrapper[4729]: I0127 06:52:03.901999 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:52:03 crc kubenswrapper[4729]: I0127 06:52:03.984211 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.166463 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.230465 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.268498 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.310572 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.330178 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.359765 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.410129 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.441509 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.472140 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.485169 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.485207 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.534183 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.574979 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.593346 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.599675 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.608013 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.867696 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 
06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.956557 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.961546 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:52:04 crc kubenswrapper[4729]: I0127 06:52:04.997386 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.013151 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.014182 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.055843 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.073586 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.173137 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.181862 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.198120 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.283685 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.314132 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.370499 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.395926 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.776350 4729 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.802720 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.902104 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:52:05 crc kubenswrapper[4729]: I0127 06:52:05.948387 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.001516 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.010745 
4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.025444 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.068252 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.286189 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.308941 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.384902 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.395135 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.442286 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.492498 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.517329 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.534105 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.542733 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.620586 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.623209 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.688986 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.745696 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.793123 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.848847 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.941484 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 
06:52:06.944155 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:52:06 crc kubenswrapper[4729]: I0127 06:52:06.992380 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.059699 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.083718 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.160046 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.172274 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.201876 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.267534 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.275491 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.287224 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.287535 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.311439 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.331494 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.457408 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.473807 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.532118 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.568980 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.569365 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.569859 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.586382 4729 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.586451 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.586513 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.587349 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"610f449015358452a540ca9c92975d520b1740697f220bf76c118c7efa06faf3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.587481 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://610f449015358452a540ca9c92975d520b1740697f220bf76c118c7efa06faf3" gracePeriod=30 Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.598593 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.647509 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.769832 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.814206 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.996055 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:52:07 crc kubenswrapper[4729]: I0127 06:52:07.997949 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.008015 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.122952 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.207709 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.244201 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.244234 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.275226 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.363043 4729 scope.go:117] "RemoveContainer" containerID="36ea56e6a9d824076c687fc9a748a5765025514eedf67e448b154923154cfe18" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.372492 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.457956 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.510814 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.557265 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.579227 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.591863 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.620060 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.709886 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.760617 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.776041 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.778592 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.820439 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.905072 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:52:08 crc kubenswrapper[4729]: I0127 06:52:08.959316 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.012956 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.042405 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.102237 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.159097 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.182200 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.210094 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.236778 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.249677 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.367824 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.368139 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d48c2f23933eee25cf2ff29cc35f69058ebf03351a6f31e6f46f1b00aef377c5"} Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.399957 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.429235 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.486827 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.518459 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.585563 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.621567 4729 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.689288 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.769778 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.914352 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:52:09 crc kubenswrapper[4729]: I0127 06:52:09.991543 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.012747 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.080225 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.165228 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.281020 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.286387 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.290909 4729 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.389212 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.460730 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.465501 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.513134 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.540120 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.676808 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.744832 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.778594 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.967599 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-bdssl"] Jan 27 06:52:10 crc kubenswrapper[4729]: E0127 06:52:10.967958 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" containerName="installer" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.967978 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" containerName="installer" Jan 27 06:52:10 crc kubenswrapper[4729]: E0127 06:52:10.968004 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerName="oauth-openshift" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.968016 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerName="oauth-openshift" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.968207 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f1651f-1170-4455-bbb3-bcf62eb786b7" containerName="installer" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.968474 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9819c6-6d83-4ef1-94bd-038e573864d9" containerName="oauth-openshift" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.969166 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.971101 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.975688 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.975801 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.975858 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.975963 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.976079 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.976212 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.976288 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.976398 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.977911 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.979019 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.979193 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980113 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-dir\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " 
pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980175 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftcx\" (UniqueName: \"kubernetes.io/projected/1ca20494-eeee-414e-a301-4dc8ac787f4a-kube-api-access-jftcx\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980225 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980298 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980403 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980438 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-policies\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980468 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980577 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980673 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980713 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980753 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980817 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.980925 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.987411 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-bdssl"] Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.989075 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.989575 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.994114 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:52:10 crc kubenswrapper[4729]: I0127 06:52:10.999025 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.073212 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081465 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081507 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081531 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081554 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081574 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081594 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-dir\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081614 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftcx\" (UniqueName: \"kubernetes.io/projected/1ca20494-eeee-414e-a301-4dc8ac787f4a-kube-api-access-jftcx\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081638 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081665 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081705 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081728 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081750 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081771 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-policies\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.081808 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.082864 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-dir\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.083917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.084016 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.084301 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-audit-policies\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.084988 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.088422 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.088652 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-login\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.088830 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.089801 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-error\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.090151 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-session\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.090859 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.094845 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.097707 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ca20494-eeee-414e-a301-4dc8ac787f4a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.098431 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.100812 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftcx\" (UniqueName: \"kubernetes.io/projected/1ca20494-eeee-414e-a301-4dc8ac787f4a-kube-api-access-jftcx\") pod \"oauth-openshift-7dcb57cccd-bdssl\" (UID: \"1ca20494-eeee-414e-a301-4dc8ac787f4a\") " pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.281360 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.293199 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.309901 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.323585 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.340097 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.439417 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.446919 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.486926 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.556355 4729 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.567132 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.572933 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.585405 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.698998 4729 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.699439 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1" gracePeriod=5 Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.700616 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.748454 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.886621 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 06:52:11 crc kubenswrapper[4729]: I0127 06:52:11.948893 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.041627 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.179751 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.249193 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.327511 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.385784 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.399958 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.404397 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.424800 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.428613 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.572722 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.827269 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:52:12 crc kubenswrapper[4729]: I0127 06:52:12.953796 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.143326 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.240389 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.300660 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.447816 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.628506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.855303 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.911142 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:52:13 crc kubenswrapper[4729]: I0127 06:52:13.979771 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:52:14 crc kubenswrapper[4729]: I0127 06:52:14.106029 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 27 06:52:14 crc kubenswrapper[4729]: I0127 06:52:14.121918 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:52:14 crc kubenswrapper[4729]: I0127 06:52:14.209840 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:52:14 crc kubenswrapper[4729]: I0127 06:52:14.276825 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dcb57cccd-bdssl"] Jan 27 06:52:14 crc kubenswrapper[4729]: I0127 06:52:14.389438 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" event={"ID":"1ca20494-eeee-414e-a301-4dc8ac787f4a","Type":"ContainerStarted","Data":"8873910e8b39129c96624fbc2ef61fd99a477ed9e373719e46ad11988e77c14a"} Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.036461 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.199765 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.281055 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.396354 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" event={"ID":"1ca20494-eeee-414e-a301-4dc8ac787f4a","Type":"ContainerStarted","Data":"df08b5361c7060ecc0aa8bf4d0c6e91daba5132e7510537f5f7484704f3a2410"} Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.396575 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.402031 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.421792 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7dcb57cccd-bdssl" podStartSLOduration=70.421771912 podStartE2EDuration="1m10.421771912s" podCreationTimestamp="2026-01-27 06:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:52:15.419825759 +0000 UTC m=+300.486947032" watchObservedRunningTime="2026-01-27 06:52:15.421771912 +0000 UTC m=+300.488893175" Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.938403 4729 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 06:52:15 crc kubenswrapper[4729]: I0127 06:52:15.967377 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.361019 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.361310 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.371495 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.371559 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.371613 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.371642 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.371690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.372202 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.372226 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.372269 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.372217 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.384334 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.407960 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.408011 4729 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1" exitCode=137 Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.408137 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.408285 4729 scope.go:117] "RemoveContainer" containerID="21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.430583 4729 scope.go:117] "RemoveContainer" containerID="21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1" Jan 27 06:52:17 crc kubenswrapper[4729]: E0127 06:52:17.431146 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1\": container with ID starting with 21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1 not found: ID does not exist" containerID="21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.431179 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1"} err="failed to get container status \"21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1\": rpc error: code = NotFound desc = could not find container \"21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1\": container with ID starting with 21a25e80e67ff95aed80d033e23d3c94cecb7a047da254a4c771e9b3fb09bee1 not found: ID does not exist" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.473006 4729 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.473041 4729 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.473051 4729 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.473062 4729 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:17 crc kubenswrapper[4729]: I0127 06:52:17.473095 4729 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.374062 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.375534 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.388817 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.388858 4729 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c43e63b3-68f2-40aa-8367-3909c3466df3" Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.391835 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:52:18 crc kubenswrapper[4729]: I0127 06:52:18.391855 4729 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c43e63b3-68f2-40aa-8367-3909c3466df3" Jan 27 06:52:29 crc kubenswrapper[4729]: I0127 06:52:29.506716 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.580606 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.584684 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.584751 4729 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="610f449015358452a540ca9c92975d520b1740697f220bf76c118c7efa06faf3" exitCode=137 Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.584784 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"610f449015358452a540ca9c92975d520b1740697f220bf76c118c7efa06faf3"} Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.584814 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4d3ddd93c55fd245a1d41dd5a151bc3a7585519edf7c38d73982b9cafdcf25ad"} Jan 27 06:52:38 crc kubenswrapper[4729]: I0127 06:52:38.584834 4729 scope.go:117] "RemoveContainer" containerID="ae4f6dc2d34e5880b2be246fb98c75bdb5d9010ec517c9953cb9a6ddd69435d2" Jan 27 06:52:39 crc kubenswrapper[4729]: I0127 06:52:39.593817 
4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 06:52:47 crc kubenswrapper[4729]: I0127 06:52:47.587130 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:52:47 crc kubenswrapper[4729]: I0127 06:52:47.595579 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:52:47 crc kubenswrapper[4729]: I0127 06:52:47.642018 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:52:48 crc kubenswrapper[4729]: I0127 06:52:48.653687 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:01 crc kubenswrapper[4729]: I0127 06:53:01.087008 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:53:01 crc kubenswrapper[4729]: I0127 06:53:01.087874 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.350401 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.351016 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zxshj" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="registry-server" containerID="cri-o://a2904954bd05c8c2ce0c06477799a1f8fe916a60190a3681b40d64f69d18db62" gracePeriod=30 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.373016 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.373271 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-978h9" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="registry-server" containerID="cri-o://84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457" gracePeriod=30 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.378799 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.379007 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" containerID="cri-o://fa9eaaa3074cc501cd2c8dceeb03bb19b938d45ef3cf9676d2d0f10acb9bdc79" gracePeriod=30 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.391020 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.391305 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-krr4s" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="registry-server" containerID="cri-o://b2829bfb59ef213051fadb00f5cc07119a4261c781bc19256eb14344d97b9488" gracePeriod=30 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.394825 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.395126 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gv9zq" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" containerID="cri-o://4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" gracePeriod=30 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.401170 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l5hnb"] Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.401414 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.401426 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.401519 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.401872 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.415428 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l5hnb"] Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.514391 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbk8\" (UniqueName: \"kubernetes.io/projected/6c8ac9fd-a774-447b-a382-e09ff41f678b-kube-api-access-bfbk8\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.514911 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.514987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.615933 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.616007 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbk8\" (UniqueName: \"kubernetes.io/projected/6c8ac9fd-a774-447b-a382-e09ff41f678b-kube-api-access-bfbk8\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.616034 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.617781 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.621691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/6c8ac9fd-a774-447b-a382-e09ff41f678b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.635825 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbk8\" (UniqueName: \"kubernetes.io/projected/6c8ac9fd-a774-447b-a382-e09ff41f678b-kube-api-access-bfbk8\") pod \"marketplace-operator-79b997595-l5hnb\" (UID: \"6c8ac9fd-a774-447b-a382-e09ff41f678b\") " pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.713683 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.726883 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.748591 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a is running failed: container process not found" containerID="4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.749054 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a is running failed: container process not found" containerID="4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.749382 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a is running failed: container process not found" containerID="4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.749409 4729 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-gv9zq" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.814656 4729 generic.go:334] "Generic (PLEG): container finished" podID="633788b3-11e3-447b-91cf-52a9563c052a" containerID="84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457" exitCode=0 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.814701 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerDied","Data":"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.814725 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-978h9" event={"ID":"633788b3-11e3-447b-91cf-52a9563c052a","Type":"ContainerDied","Data":"f39589c9d76a9c32ddf0359af6f2b30f3980c606cd46f67336bdfb5b9ab6344b"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.814741 4729 scope.go:117] "RemoveContainer" containerID="84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.814831 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-978h9" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.817249 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content\") pod \"633788b3-11e3-447b-91cf-52a9563c052a\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.817280 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzddq\" (UniqueName: \"kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq\") pod \"633788b3-11e3-447b-91cf-52a9563c052a\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.817313 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities\") pod \"633788b3-11e3-447b-91cf-52a9563c052a\" (UID: \"633788b3-11e3-447b-91cf-52a9563c052a\") " Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.818331 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities" (OuterVolumeSpecName: "utilities") pod "633788b3-11e3-447b-91cf-52a9563c052a" (UID: "633788b3-11e3-447b-91cf-52a9563c052a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.822739 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq" (OuterVolumeSpecName: "kube-api-access-gzddq") pod "633788b3-11e3-447b-91cf-52a9563c052a" (UID: "633788b3-11e3-447b-91cf-52a9563c052a"). InnerVolumeSpecName "kube-api-access-gzddq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.832958 4729 generic.go:334] "Generic (PLEG): container finished" podID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerID="4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" exitCode=0 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.833034 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerDied","Data":"4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.834600 4729 generic.go:334] "Generic (PLEG): container finished" podID="83578f10-10b1-4953-902d-cf066f164ffe" containerID="b2829bfb59ef213051fadb00f5cc07119a4261c781bc19256eb14344d97b9488" exitCode=0 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.834653 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerDied","Data":"b2829bfb59ef213051fadb00f5cc07119a4261c781bc19256eb14344d97b9488"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.842942 4729 generic.go:334] "Generic (PLEG): container finished" podID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerID="fa9eaaa3074cc501cd2c8dceeb03bb19b938d45ef3cf9676d2d0f10acb9bdc79" exitCode=0 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.843007 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" event={"ID":"d3ce67a6-c46d-4334-b408-48753b87ea93","Type":"ContainerDied","Data":"fa9eaaa3074cc501cd2c8dceeb03bb19b938d45ef3cf9676d2d0f10acb9bdc79"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.844616 4729 generic.go:334] "Generic (PLEG): container finished" podID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerID="a2904954bd05c8c2ce0c06477799a1f8fe916a60190a3681b40d64f69d18db62" exitCode=0 Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.844635 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerDied","Data":"a2904954bd05c8c2ce0c06477799a1f8fe916a60190a3681b40d64f69d18db62"} Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.858355 4729 scope.go:117] "RemoveContainer" containerID="ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.883785 4729 scope.go:117] "RemoveContainer" containerID="e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.900163 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.911270 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "633788b3-11e3-447b-91cf-52a9563c052a" (UID: "633788b3-11e3-447b-91cf-52a9563c052a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.917709 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.922246 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.922265 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzddq\" (UniqueName: \"kubernetes.io/projected/633788b3-11e3-447b-91cf-52a9563c052a-kube-api-access-gzddq\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.922275 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633788b3-11e3-447b-91cf-52a9563c052a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.923426 4729 scope.go:117] "RemoveContainer" containerID="84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457" Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.925378 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457\": container with ID starting with 84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457 not found: ID does not exist" containerID="84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.925434 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457"} err="failed to get container status \"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457\": rpc error: code = NotFound desc = could not find container \"84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457\": container with ID starting with 84242e70a9d9200823358eb2c6d57b900787ff38bd1b02a192b15adea06d0457 not found: ID does not exist" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.925467 4729 scope.go:117] "RemoveContainer" containerID="ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334" Jan 27 06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.926282 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334\": container with ID starting with ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334 not found: ID does not exist" containerID="ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.926308 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334"} err="failed to get container status \"ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334\": rpc error: code = NotFound desc = could not find container \"ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334\": container with ID starting with ea4f2f9fdf25aa55f908aa9a3642d7dd69cc9655e7aa37d92bfcf76daab29334 not found: ID does not exist" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.926322 4729 scope.go:117] "RemoveContainer" containerID="e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920" Jan 27 
06:53:14 crc kubenswrapper[4729]: E0127 06:53:14.926480 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920\": container with ID starting with e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920 not found: ID does not exist" containerID="e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.926495 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920"} err="failed to get container status \"e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920\": rpc error: code = NotFound desc = could not find container \"e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920\": container with ID starting with e344e9af107a1fff41d28c9b3fa2a69e3a5a2cc933328d1fb45567518acfc920 not found: ID does not exist" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.928184 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:53:14 crc kubenswrapper[4729]: I0127 06:53:14.934302 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022648 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics\") pod \"d3ce67a6-c46d-4334-b408-48753b87ea93\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022691 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities\") pod \"83578f10-10b1-4953-902d-cf066f164ffe\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022743 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhfg5\" (UniqueName: \"kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5\") pod \"83578f10-10b1-4953-902d-cf066f164ffe\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022770 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content\") pod \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022835 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9pd5\" (UniqueName: \"kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5\") pod \"d3ce67a6-c46d-4334-b408-48753b87ea93\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022852 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities\") pod \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\" 
(UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022879 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities\") pod \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022894 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content\") pod \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022910 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca\") pod \"d3ce67a6-c46d-4334-b408-48753b87ea93\" (UID: \"d3ce67a6-c46d-4334-b408-48753b87ea93\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022927 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6zvs\" (UniqueName: \"kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs\") pod \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\" (UID: \"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022950 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content\") pod \"83578f10-10b1-4953-902d-cf066f164ffe\" (UID: \"83578f10-10b1-4953-902d-cf066f164ffe\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.022968 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9fck\" (UniqueName: \"kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck\") pod \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\" (UID: \"bf5b5ac2-9195-41a1-bb76-9017cf05397b\") " Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.023726 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities" (OuterVolumeSpecName: "utilities") pod "83578f10-10b1-4953-902d-cf066f164ffe" (UID: "83578f10-10b1-4953-902d-cf066f164ffe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.024606 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities" (OuterVolumeSpecName: "utilities") pod "bf5b5ac2-9195-41a1-bb76-9017cf05397b" (UID: "bf5b5ac2-9195-41a1-bb76-9017cf05397b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.024993 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d3ce67a6-c46d-4334-b408-48753b87ea93" (UID: "d3ce67a6-c46d-4334-b408-48753b87ea93"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.025445 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities" (OuterVolumeSpecName: "utilities") pod "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" (UID: "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.028053 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck" (OuterVolumeSpecName: "kube-api-access-r9fck") pod "bf5b5ac2-9195-41a1-bb76-9017cf05397b" (UID: "bf5b5ac2-9195-41a1-bb76-9017cf05397b"). InnerVolumeSpecName "kube-api-access-r9fck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.030575 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs" (OuterVolumeSpecName: "kube-api-access-p6zvs") pod "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" (UID: "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1"). InnerVolumeSpecName "kube-api-access-p6zvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.031008 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5" (OuterVolumeSpecName: "kube-api-access-h9pd5") pod "d3ce67a6-c46d-4334-b408-48753b87ea93" (UID: "d3ce67a6-c46d-4334-b408-48753b87ea93"). InnerVolumeSpecName "kube-api-access-h9pd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.032854 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5" (OuterVolumeSpecName: "kube-api-access-dhfg5") pod "83578f10-10b1-4953-902d-cf066f164ffe" (UID: "83578f10-10b1-4953-902d-cf066f164ffe"). InnerVolumeSpecName "kube-api-access-dhfg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.033016 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d3ce67a6-c46d-4334-b408-48753b87ea93" (UID: "d3ce67a6-c46d-4334-b408-48753b87ea93"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.055563 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83578f10-10b1-4953-902d-cf066f164ffe" (UID: "83578f10-10b1-4953-902d-cf066f164ffe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.075052 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf5b5ac2-9195-41a1-bb76-9017cf05397b" (UID: "bf5b5ac2-9195-41a1-bb76-9017cf05397b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124487 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhfg5\" (UniqueName: \"kubernetes.io/projected/83578f10-10b1-4953-902d-cf066f164ffe-kube-api-access-dhfg5\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124509 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124518 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124527 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9pd5\" (UniqueName: \"kubernetes.io/projected/d3ce67a6-c46d-4334-b408-48753b87ea93-kube-api-access-h9pd5\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124536 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5b5ac2-9195-41a1-bb76-9017cf05397b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124545 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124553 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6zvs\" (UniqueName: \"kubernetes.io/projected/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-kube-api-access-p6zvs\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124561 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124569 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9fck\" (UniqueName: \"kubernetes.io/projected/bf5b5ac2-9195-41a1-bb76-9017cf05397b-kube-api-access-r9fck\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124578 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3ce67a6-c46d-4334-b408-48753b87ea93-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.124585 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83578f10-10b1-4953-902d-cf066f164ffe-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc 
kubenswrapper[4729]: I0127 06:53:15.148120 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.149836 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-978h9"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.176380 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" (UID: "f80985e9-d009-4ceb-bd8e-535ef0e0a9e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.225913 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.316848 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l5hnb"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.850334 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9zq" event={"ID":"f80985e9-d009-4ceb-bd8e-535ef0e0a9e1","Type":"ContainerDied","Data":"15e4b6957383ce5d182305f6bc940c89ea3c95f75c523ab1df3ff65b164ef2bd"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.850381 4729 scope.go:117] "RemoveContainer" containerID="4286d6aa78734b5bed70612b2b12d72a5935a8f78072cf3847d2744ec678cb1a" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.850456 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9zq" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.857441 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krr4s" event={"ID":"83578f10-10b1-4953-902d-cf066f164ffe","Type":"ContainerDied","Data":"df0b77d99905d465197b14eac023f3cdee730bbb43f983ca141e1baab5aa312a"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.857483 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krr4s" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.858751 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" event={"ID":"6c8ac9fd-a774-447b-a382-e09ff41f678b","Type":"ContainerStarted","Data":"7929baaced5c605f8aeab1434ff7402b89791f83c1124f182344f367a9cf0e93"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.858789 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" event={"ID":"6c8ac9fd-a774-447b-a382-e09ff41f678b","Type":"ContainerStarted","Data":"f79b1ae66e2189116ba92c1ba114db998777fd94b00d49b03c051b4ae602ed33"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.859433 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.863494 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.863494 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9l2jg" event={"ID":"d3ce67a6-c46d-4334-b408-48753b87ea93","Type":"ContainerDied","Data":"ff20b02bea4c54cc9e6a354ad19045a0d02bd0a16376bd77e1c7ee0b45f8a223"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.868132 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxshj" event={"ID":"bf5b5ac2-9195-41a1-bb76-9017cf05397b","Type":"ContainerDied","Data":"d4cbc3558c1c6e54d27ff134062bd1cda66333f37361c90b22a846aa459961be"} Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.868219 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxshj" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.871620 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.871663 4729 scope.go:117] "RemoveContainer" containerID="bbf06a1c35f1a978c5676d284c6c4249eafe7e5f648b1b301262c4fd74df437b" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.897330 4729 scope.go:117] "RemoveContainer" containerID="4f3d68713bab7066eb751546cfc64759b5fecb14c62e315a281846ed03354813" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.908883 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l5hnb" podStartSLOduration=1.9088667830000001 podStartE2EDuration="1.908866783s" podCreationTimestamp="2026-01-27 06:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:53:15.892881325 +0000 UTC m=+360.960002588" watchObservedRunningTime="2026-01-27 06:53:15.908866783 +0000 UTC m=+360.975988046" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.911496 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.914304 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9l2jg"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.925598 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.928467 4729 scope.go:117] "RemoveContainer" containerID="b2829bfb59ef213051fadb00f5cc07119a4261c781bc19256eb14344d97b9488" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.937899 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-krr4s"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.945059 4729 scope.go:117] "RemoveContainer" containerID="560cf7bf6ed0eb41e4345bf07a9bc51dfcaa11038daaa6d98c117c8d88699b17" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.952752 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.957638 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gv9zq"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 
06:53:15.962712 4729 scope.go:117] "RemoveContainer" containerID="b730cfb804bc4e316c6e5eca6ef9b8f22e8ad50592955dd0cf2f84927aceeba1" Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.977378 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.982700 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zxshj"] Jan 27 06:53:15 crc kubenswrapper[4729]: I0127 06:53:15.999494 4729 scope.go:117] "RemoveContainer" containerID="fa9eaaa3074cc501cd2c8dceeb03bb19b938d45ef3cf9676d2d0f10acb9bdc79" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.013048 4729 scope.go:117] "RemoveContainer" containerID="a2904954bd05c8c2ce0c06477799a1f8fe916a60190a3681b40d64f69d18db62" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.029412 4729 scope.go:117] "RemoveContainer" containerID="4112be9c12f4fcc19378915c0f017e9219089d84713cd66048520684424f7c8c" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.050906 4729 scope.go:117] "RemoveContainer" containerID="8d3c03b5c905f7999107cf9f39a906b69472ca1ee1b3b4ec57fd117fb45388b9" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.174639 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgwph"] Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175002 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175012 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175022 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175029 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175040 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175047 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175058 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175079 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175087 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175093 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175103 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83578f10-10b1-4953-902d-cf066f164ffe" 
containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175108 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175116 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175123 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175130 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175137 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175146 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175151 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175158 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175164 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175173 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175180 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="extract-utilities" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175194 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175200 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: E0127 06:53:16.175206 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175212 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="extract-content" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175301 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="83578f10-10b1-4953-902d-cf066f164ffe" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175311 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="633788b3-11e3-447b-91cf-52a9563c052a" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175322 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175330 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" containerName="registry-server" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175340 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" containerName="marketplace-operator" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.175964 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.177969 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.192182 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgwph"] Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.342349 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-catalog-content\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.342394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zqm\" (UniqueName: \"kubernetes.io/projected/d748891f-d6ea-4e31-8e19-c41fe08949ab-kube-api-access-p2zqm\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.342423 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-utilities\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.368091 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633788b3-11e3-447b-91cf-52a9563c052a" path="/var/lib/kubelet/pods/633788b3-11e3-447b-91cf-52a9563c052a/volumes" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.368671 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83578f10-10b1-4953-902d-cf066f164ffe" path="/var/lib/kubelet/pods/83578f10-10b1-4953-902d-cf066f164ffe/volumes" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.369221 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5b5ac2-9195-41a1-bb76-9017cf05397b" path="/var/lib/kubelet/pods/bf5b5ac2-9195-41a1-bb76-9017cf05397b/volumes" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.370239 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce67a6-c46d-4334-b408-48753b87ea93" path="/var/lib/kubelet/pods/d3ce67a6-c46d-4334-b408-48753b87ea93/volumes" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.370654 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80985e9-d009-4ceb-bd8e-535ef0e0a9e1" path="/var/lib/kubelet/pods/f80985e9-d009-4ceb-bd8e-535ef0e0a9e1/volumes" Jan 27 
06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.443085 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zqm\" (UniqueName: \"kubernetes.io/projected/d748891f-d6ea-4e31-8e19-c41fe08949ab-kube-api-access-p2zqm\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.443131 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-utilities\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.443192 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-catalog-content\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.443589 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-catalog-content\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.443713 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d748891f-d6ea-4e31-8e19-c41fe08949ab-utilities\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.475225 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zqm\" (UniqueName: \"kubernetes.io/projected/d748891f-d6ea-4e31-8e19-c41fe08949ab-kube-api-access-p2zqm\") pod \"community-operators-qgwph\" (UID: \"d748891f-d6ea-4e31-8e19-c41fe08949ab\") " pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.489649 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.498433 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.777170 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2wb4g"] Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.778704 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.783425 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.787352 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wb4g"] Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.888594 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgwph"] Jan 27 06:53:16 crc kubenswrapper[4729]: W0127 06:53:16.898700 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd748891f_d6ea_4e31_8e19_c41fe08949ab.slice/crio-0ed37304688dfbb9ab2399ed6829514b8436c78f22a6fe490cea1acf68fd727d WatchSource:0}: Error finding container 0ed37304688dfbb9ab2399ed6829514b8436c78f22a6fe490cea1acf68fd727d: Status 404 returned error can't find the container with id 0ed37304688dfbb9ab2399ed6829514b8436c78f22a6fe490cea1acf68fd727d Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.950446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-utilities\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.950479 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-catalog-content\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:16 crc kubenswrapper[4729]: I0127 06:53:16.950502 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvqp\" (UniqueName: \"kubernetes.io/projected/46d40a19-dcbc-4606-b3f6-cda045b22c35-kube-api-access-fbvqp\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.052716 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-utilities\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.052763 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-catalog-content\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.052794 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvqp\" (UniqueName: \"kubernetes.io/projected/46d40a19-dcbc-4606-b3f6-cda045b22c35-kube-api-access-fbvqp\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " 
pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.053480 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-catalog-content\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.053658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d40a19-dcbc-4606-b3f6-cda045b22c35-utilities\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.071915 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvqp\" (UniqueName: \"kubernetes.io/projected/46d40a19-dcbc-4606-b3f6-cda045b22c35-kube-api-access-fbvqp\") pod \"redhat-marketplace-2wb4g\" (UID: \"46d40a19-dcbc-4606-b3f6-cda045b22c35\") " pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.102299 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.478742 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wb4g"] Jan 27 06:53:17 crc kubenswrapper[4729]: W0127 06:53:17.490947 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46d40a19_dcbc_4606_b3f6_cda045b22c35.slice/crio-2c820261af82fc1cfb6d36f500192c88650b28156e0c10002d4dde3148dfc04f WatchSource:0}: Error finding container 2c820261af82fc1cfb6d36f500192c88650b28156e0c10002d4dde3148dfc04f: Status 404 returned error can't find the container with id 2c820261af82fc1cfb6d36f500192c88650b28156e0c10002d4dde3148dfc04f Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.883136 4729 generic.go:334] "Generic (PLEG): container finished" podID="d748891f-d6ea-4e31-8e19-c41fe08949ab" containerID="8e976a42c6a1076525fa48e46bc12cbed1d47497586fd248fa6c799af9872e17" exitCode=0 Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.883208 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgwph" event={"ID":"d748891f-d6ea-4e31-8e19-c41fe08949ab","Type":"ContainerDied","Data":"8e976a42c6a1076525fa48e46bc12cbed1d47497586fd248fa6c799af9872e17"} Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.883268 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgwph" event={"ID":"d748891f-d6ea-4e31-8e19-c41fe08949ab","Type":"ContainerStarted","Data":"0ed37304688dfbb9ab2399ed6829514b8436c78f22a6fe490cea1acf68fd727d"} Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.885980 4729 generic.go:334] "Generic (PLEG): container finished" podID="46d40a19-dcbc-4606-b3f6-cda045b22c35" containerID="055f7b495c69089dccbebb966a4db172c0fa013dae7ef2e1eb895cf3e527e045" exitCode=0 Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.886127 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wb4g" 
event={"ID":"46d40a19-dcbc-4606-b3f6-cda045b22c35","Type":"ContainerDied","Data":"055f7b495c69089dccbebb966a4db172c0fa013dae7ef2e1eb895cf3e527e045"} Jan 27 06:53:17 crc kubenswrapper[4729]: I0127 06:53:17.886165 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wb4g" event={"ID":"46d40a19-dcbc-4606-b3f6-cda045b22c35","Type":"ContainerStarted","Data":"2c820261af82fc1cfb6d36f500192c88650b28156e0c10002d4dde3148dfc04f"} Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.579869 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.581331 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.583452 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.586534 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.675648 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxt2\" (UniqueName: \"kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.675694 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.675728 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.808193 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxt2\" (UniqueName: \"kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.808625 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.808734 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities\") pod \"certified-operators-hcqgr\" (UID: 
\"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.809299 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.810043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.849987 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxt2\" (UniqueName: \"kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2\") pod \"certified-operators-hcqgr\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.892349 4729 generic.go:334] "Generic (PLEG): container finished" podID="46d40a19-dcbc-4606-b3f6-cda045b22c35" containerID="2fa2dbe30c439ef8428806ce3ba13db36a7c3fbf66f8611a526933335539c55e" exitCode=0 Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.892433 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wb4g" event={"ID":"46d40a19-dcbc-4606-b3f6-cda045b22c35","Type":"ContainerDied","Data":"2fa2dbe30c439ef8428806ce3ba13db36a7c3fbf66f8611a526933335539c55e"} Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.897006 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:18 crc kubenswrapper[4729]: I0127 06:53:18.914839 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgwph" event={"ID":"d748891f-d6ea-4e31-8e19-c41fe08949ab","Type":"ContainerStarted","Data":"a3790d4dc988e2712719cd55630b8cd0b2bad23661851f5cf4463e1739bfac95"} Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.175739 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwxfg"] Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.176915 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.181310 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.189109 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwxfg"] Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.296337 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 06:53:19 crc kubenswrapper[4729]: W0127 06:53:19.304542 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod529eb2a1_6122_4897_90c9_3212a2de14e1.slice/crio-1956baeadf77ccb26b2ac41ae745ed7efb4c01fc9fbe94814a37347a5da852fe WatchSource:0}: Error finding container 1956baeadf77ccb26b2ac41ae745ed7efb4c01fc9fbe94814a37347a5da852fe: Status 404 returned error can't find the container with id 1956baeadf77ccb26b2ac41ae745ed7efb4c01fc9fbe94814a37347a5da852fe Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.320651 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-catalog-content\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.320707 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2q2\" (UniqueName: \"kubernetes.io/projected/967e206f-9b9e-4691-9041-6b91c6732721-kube-api-access-pv2q2\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.320961 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-utilities\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.423406 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-utilities\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.423483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-catalog-content\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.423525 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2q2\" (UniqueName: \"kubernetes.io/projected/967e206f-9b9e-4691-9041-6b91c6732721-kube-api-access-pv2q2\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 
27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.424038 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-catalog-content\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.424275 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/967e206f-9b9e-4691-9041-6b91c6732721-utilities\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.441036 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2q2\" (UniqueName: \"kubernetes.io/projected/967e206f-9b9e-4691-9041-6b91c6732721-kube-api-access-pv2q2\") pod \"redhat-operators-pwxfg\" (UID: \"967e206f-9b9e-4691-9041-6b91c6732721\") " pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.506255 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.925030 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwxfg"] Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.926257 4729 generic.go:334] "Generic (PLEG): container finished" podID="d748891f-d6ea-4e31-8e19-c41fe08949ab" containerID="a3790d4dc988e2712719cd55630b8cd0b2bad23661851f5cf4463e1739bfac95" exitCode=0 Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.926309 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgwph" event={"ID":"d748891f-d6ea-4e31-8e19-c41fe08949ab","Type":"ContainerDied","Data":"a3790d4dc988e2712719cd55630b8cd0b2bad23661851f5cf4463e1739bfac95"} Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.930091 4729 generic.go:334] "Generic (PLEG): container finished" podID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerID="4677de50d89eacb120a03523432103274d5e538c679a07171688f249d4b5bfc2" exitCode=0 Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.930172 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerDied","Data":"4677de50d89eacb120a03523432103274d5e538c679a07171688f249d4b5bfc2"} Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.930229 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerStarted","Data":"1956baeadf77ccb26b2ac41ae745ed7efb4c01fc9fbe94814a37347a5da852fe"} Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.935742 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wb4g" event={"ID":"46d40a19-dcbc-4606-b3f6-cda045b22c35","Type":"ContainerStarted","Data":"904e69e627d10d89c36706517c0c7e5606757eaaefcd0188e8bbd7a850988a49"} Jan 27 06:53:19 crc kubenswrapper[4729]: I0127 06:53:19.994381 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2wb4g" podStartSLOduration=2.577671815 
podStartE2EDuration="3.994363856s" podCreationTimestamp="2026-01-27 06:53:16 +0000 UTC" firstStartedPulling="2026-01-27 06:53:17.886951376 +0000 UTC m=+362.954072649" lastFinishedPulling="2026-01-27 06:53:19.303643437 +0000 UTC m=+364.370764690" observedRunningTime="2026-01-27 06:53:19.989754906 +0000 UTC m=+365.056876169" watchObservedRunningTime="2026-01-27 06:53:19.994363856 +0000 UTC m=+365.061485109" Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.943009 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgwph" event={"ID":"d748891f-d6ea-4e31-8e19-c41fe08949ab","Type":"ContainerStarted","Data":"20f17bb3588500e1f448606a2c05db765e8753be9988fca50cd8fba604629a89"} Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.946648 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerDied","Data":"94a5b532ca41b3fe95061fded994f9d783cf95cb2d5775aba83f7abf5f94057c"} Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.946531 4729 generic.go:334] "Generic (PLEG): container finished" podID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerID="94a5b532ca41b3fe95061fded994f9d783cf95cb2d5775aba83f7abf5f94057c" exitCode=0 Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.949127 4729 generic.go:334] "Generic (PLEG): container finished" podID="967e206f-9b9e-4691-9041-6b91c6732721" containerID="0c799a3a6da12fe56f445cc2bc63763e7b3a85365e726404750df768df15a605" exitCode=0 Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.949239 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwxfg" event={"ID":"967e206f-9b9e-4691-9041-6b91c6732721","Type":"ContainerDied","Data":"0c799a3a6da12fe56f445cc2bc63763e7b3a85365e726404750df768df15a605"} Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.949264 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwxfg" event={"ID":"967e206f-9b9e-4691-9041-6b91c6732721","Type":"ContainerStarted","Data":"41ca4661471c35de385f6244bc2e964f7494cad3b214fcb89e73cf2724c3e1c9"} Jan 27 06:53:20 crc kubenswrapper[4729]: I0127 06:53:20.964248 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgwph" podStartSLOduration=2.455420308 podStartE2EDuration="4.96423094s" podCreationTimestamp="2026-01-27 06:53:16 +0000 UTC" firstStartedPulling="2026-01-27 06:53:17.885284372 +0000 UTC m=+362.952405635" lastFinishedPulling="2026-01-27 06:53:20.394095004 +0000 UTC m=+365.461216267" observedRunningTime="2026-01-27 06:53:20.963349682 +0000 UTC m=+366.030470955" watchObservedRunningTime="2026-01-27 06:53:20.96423094 +0000 UTC m=+366.031352203" Jan 27 06:53:21 crc kubenswrapper[4729]: I0127 06:53:21.958953 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerStarted","Data":"d5aacb7d7bce31cc955ea6a799091d4b1b8282fdfe9a0adff3738053879dfab0"} Jan 27 06:53:21 crc kubenswrapper[4729]: I0127 06:53:21.978831 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hcqgr" podStartSLOduration=2.5104016270000002 podStartE2EDuration="3.978816236s" podCreationTimestamp="2026-01-27 06:53:18 +0000 UTC" firstStartedPulling="2026-01-27 06:53:19.932663904 +0000 UTC m=+364.999785177" 
lastFinishedPulling="2026-01-27 06:53:21.401078523 +0000 UTC m=+366.468199786" observedRunningTime="2026-01-27 06:53:21.977574306 +0000 UTC m=+367.044695579" watchObservedRunningTime="2026-01-27 06:53:21.978816236 +0000 UTC m=+367.045937499" Jan 27 06:53:22 crc kubenswrapper[4729]: I0127 06:53:22.963828 4729 generic.go:334] "Generic (PLEG): container finished" podID="967e206f-9b9e-4691-9041-6b91c6732721" containerID="abe313c8b6781702737de608346eefffaa88f526702bbca5c6f33adddf8085b8" exitCode=0 Jan 27 06:53:22 crc kubenswrapper[4729]: I0127 06:53:22.965221 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwxfg" event={"ID":"967e206f-9b9e-4691-9041-6b91c6732721","Type":"ContainerDied","Data":"abe313c8b6781702737de608346eefffaa88f526702bbca5c6f33adddf8085b8"} Jan 27 06:53:23 crc kubenswrapper[4729]: I0127 06:53:23.973576 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwxfg" event={"ID":"967e206f-9b9e-4691-9041-6b91c6732721","Type":"ContainerStarted","Data":"b5c9542fe3d3c5f1ee7cfc4b67d203cece874cde320f516d4791245031bfe221"} Jan 27 06:53:24 crc kubenswrapper[4729]: I0127 06:53:24.001634 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwxfg" podStartSLOduration=2.564012789 podStartE2EDuration="5.001619521s" podCreationTimestamp="2026-01-27 06:53:19 +0000 UTC" firstStartedPulling="2026-01-27 06:53:20.95095652 +0000 UTC m=+366.018077783" lastFinishedPulling="2026-01-27 06:53:23.388563252 +0000 UTC m=+368.455684515" observedRunningTime="2026-01-27 06:53:23.999486552 +0000 UTC m=+369.066607835" watchObservedRunningTime="2026-01-27 06:53:24.001619521 +0000 UTC m=+369.068740784" Jan 27 06:53:26 crc kubenswrapper[4729]: I0127 06:53:26.499612 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:26 crc kubenswrapper[4729]: I0127 06:53:26.500203 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:26 crc kubenswrapper[4729]: I0127 06:53:26.541292 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:27 crc kubenswrapper[4729]: I0127 06:53:27.027714 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgwph" Jan 27 06:53:27 crc kubenswrapper[4729]: I0127 06:53:27.103413 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:27 crc kubenswrapper[4729]: I0127 06:53:27.103721 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:27 crc kubenswrapper[4729]: I0127 06:53:27.142663 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:28 crc kubenswrapper[4729]: I0127 06:53:28.033984 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2wb4g" Jan 27 06:53:28 crc kubenswrapper[4729]: I0127 06:53:28.898220 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:28 crc kubenswrapper[4729]: I0127 06:53:28.898300 4729 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:28 crc kubenswrapper[4729]: I0127 06:53:28.940829 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:29 crc kubenswrapper[4729]: I0127 06:53:29.027799 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 06:53:29 crc kubenswrapper[4729]: I0127 06:53:29.506377 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:29 crc kubenswrapper[4729]: I0127 06:53:29.506674 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:29 crc kubenswrapper[4729]: I0127 06:53:29.542871 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:30 crc kubenswrapper[4729]: I0127 06:53:30.031278 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwxfg" Jan 27 06:53:31 crc kubenswrapper[4729]: I0127 06:53:31.087214 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:53:31 crc kubenswrapper[4729]: I0127 06:53:31.087279 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.087336 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.088218 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.088287 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.089010 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.089158 4729 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b" gracePeriod=600 Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.223658 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b" exitCode=0 Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.223698 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b"} Jan 27 06:54:01 crc kubenswrapper[4729]: I0127 06:54:01.223728 4729 scope.go:117] "RemoveContainer" containerID="3a9cdc21a8cfdfd0b6da1d4b2b8469a4209ffb5a29a3bc9e4cd697a39b2188dd" Jan 27 06:54:02 crc kubenswrapper[4729]: I0127 06:54:02.233481 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5"} Jan 27 06:56:01 crc kubenswrapper[4729]: I0127 06:56:01.087814 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:01 crc kubenswrapper[4729]: I0127 06:56:01.088460 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:56:31 crc kubenswrapper[4729]: I0127 06:56:31.087169 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:31 crc kubenswrapper[4729]: I0127 06:56:31.087720 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.087928 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.089543 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.089945 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.092996 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.093128 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5" gracePeriod=600 Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.380317 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5" exitCode=0 Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.380517 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5"} Jan 27 06:57:01 crc kubenswrapper[4729]: I0127 06:57:01.380872 4729 scope.go:117] "RemoveContainer" containerID="30caa52735f0f0b43749c3d281fcec796c7cc4b6db375d2df8aa10df7c4af05b" Jan 27 06:57:02 crc kubenswrapper[4729]: I0127 06:57:02.388974 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b"} Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.703916 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmvrf"] Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.705355 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.724790 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmvrf"] Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.889862 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-registry-tls\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.889928 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d765e459-1032-4891-a491-8e52636cbe80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.889952 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d765e459-1032-4891-a491-8e52636cbe80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.890368 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-trusted-ca\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.890394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-bound-sa-token\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.890413 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85gt\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-kube-api-access-r85gt\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.890445 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.890466 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-registry-certificates\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.909854 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.991053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-trusted-ca\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.991266 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-bound-sa-token\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.991364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85gt\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-kube-api-access-r85gt\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.991676 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-registry-certificates\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.991788 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-registry-tls\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.992638 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d765e459-1032-4891-a491-8e52636cbe80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.992728 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d765e459-1032-4891-a491-8e52636cbe80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.992287 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-trusted-ca\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.993173 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d765e459-1032-4891-a491-8e52636cbe80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.993925 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d765e459-1032-4891-a491-8e52636cbe80-registry-certificates\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.999251 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d765e459-1032-4891-a491-8e52636cbe80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:06 crc kubenswrapper[4729]: I0127 06:58:06.999861 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-registry-tls\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.012607 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85gt\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-kube-api-access-r85gt\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.016043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d765e459-1032-4891-a491-8e52636cbe80-bound-sa-token\") pod \"image-registry-66df7c8f76-hmvrf\" (UID: \"d765e459-1032-4891-a491-8e52636cbe80\") " pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.022853 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.316410 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hmvrf"] Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.807062 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" event={"ID":"d765e459-1032-4891-a491-8e52636cbe80","Type":"ContainerStarted","Data":"ebc92f16226241219e207631bcfd70b57043992b57651b3ef05f7d9deebc324e"} Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.807141 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" event={"ID":"d765e459-1032-4891-a491-8e52636cbe80","Type":"ContainerStarted","Data":"c533aa1c47dd552a034e5bb465439a10dd96f60c2311b481b7548cb0fb318420"} Jan 27 06:58:07 crc kubenswrapper[4729]: I0127 06:58:07.807301 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:27 crc kubenswrapper[4729]: I0127 06:58:27.030888 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" Jan 27 06:58:27 crc kubenswrapper[4729]: I0127 06:58:27.056843 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hmvrf" podStartSLOduration=21.056826671 podStartE2EDuration="21.056826671s" podCreationTimestamp="2026-01-27 06:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:58:07.839519183 +0000 UTC m=+652.906640486" watchObservedRunningTime="2026-01-27 06:58:27.056826671 +0000 UTC m=+672.123947954" Jan 27 06:58:27 crc kubenswrapper[4729]: I0127 06:58:27.103845 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.894892 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.895894 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.903398 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-64jn5" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.903567 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.905926 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.920363 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-w6c52"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.921137 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-w6c52" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.925774 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-ts6sv" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.933520 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.943474 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bp54v"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.944141 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.946888 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wglcr" Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.951599 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-w6c52"] Jan 27 06:58:35 crc kubenswrapper[4729]: I0127 06:58:35.973490 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bp54v"] Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.002263 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgh5\" (UniqueName: \"kubernetes.io/projected/7fbae4f4-f168-4142-9f19-6644887d053f-kube-api-access-vrgh5\") pod \"cert-manager-webhook-687f57d79b-bp54v\" (UID: \"7fbae4f4-f168-4142-9f19-6644887d053f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.002307 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cz5r\" (UniqueName: \"kubernetes.io/projected/71eeaf2d-639f-494a-8b0a-230e4c800a72-kube-api-access-2cz5r\") pod \"cert-manager-858654f9db-w6c52\" (UID: \"71eeaf2d-639f-494a-8b0a-230e4c800a72\") " pod="cert-manager/cert-manager-858654f9db-w6c52" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.002350 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fdmm\" (UniqueName: \"kubernetes.io/projected/7a77adf9-d502-450e-b5b7-8421a46b658c-kube-api-access-4fdmm\") pod \"cert-manager-cainjector-cf98fcc89-hsjzv\" (UID: \"7a77adf9-d502-450e-b5b7-8421a46b658c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.104063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgh5\" (UniqueName: \"kubernetes.io/projected/7fbae4f4-f168-4142-9f19-6644887d053f-kube-api-access-vrgh5\") pod \"cert-manager-webhook-687f57d79b-bp54v\" (UID: \"7fbae4f4-f168-4142-9f19-6644887d053f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.104127 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cz5r\" (UniqueName: \"kubernetes.io/projected/71eeaf2d-639f-494a-8b0a-230e4c800a72-kube-api-access-2cz5r\") pod \"cert-manager-858654f9db-w6c52\" (UID: \"71eeaf2d-639f-494a-8b0a-230e4c800a72\") " pod="cert-manager/cert-manager-858654f9db-w6c52" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.104158 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fdmm\" (UniqueName: \"kubernetes.io/projected/7a77adf9-d502-450e-b5b7-8421a46b658c-kube-api-access-4fdmm\") pod \"cert-manager-cainjector-cf98fcc89-hsjzv\" (UID: \"7a77adf9-d502-450e-b5b7-8421a46b658c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.128443 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgh5\" (UniqueName: \"kubernetes.io/projected/7fbae4f4-f168-4142-9f19-6644887d053f-kube-api-access-vrgh5\") pod \"cert-manager-webhook-687f57d79b-bp54v\" (UID: \"7fbae4f4-f168-4142-9f19-6644887d053f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.130997 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fdmm\" (UniqueName: \"kubernetes.io/projected/7a77adf9-d502-450e-b5b7-8421a46b658c-kube-api-access-4fdmm\") pod \"cert-manager-cainjector-cf98fcc89-hsjzv\" (UID: \"7a77adf9-d502-450e-b5b7-8421a46b658c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.131650 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cz5r\" (UniqueName: \"kubernetes.io/projected/71eeaf2d-639f-494a-8b0a-230e4c800a72-kube-api-access-2cz5r\") pod \"cert-manager-858654f9db-w6c52\" (UID: \"71eeaf2d-639f-494a-8b0a-230e4c800a72\") " pod="cert-manager/cert-manager-858654f9db-w6c52" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.219552 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.233875 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-w6c52" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.272790 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.478288 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv"] Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.497704 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.525955 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-w6c52"] Jan 27 06:58:36 crc kubenswrapper[4729]: W0127 06:58:36.542710 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71eeaf2d_639f_494a_8b0a_230e4c800a72.slice/crio-54eb609089d9efdb64ddcbaf9aae9a773e8975a03ef0bd79416991283cd0507e WatchSource:0}: Error finding container 54eb609089d9efdb64ddcbaf9aae9a773e8975a03ef0bd79416991283cd0507e: Status 404 returned error can't find the container with id 54eb609089d9efdb64ddcbaf9aae9a773e8975a03ef0bd79416991283cd0507e Jan 27 06:58:36 crc kubenswrapper[4729]: I0127 06:58:36.554538 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bp54v"] Jan 27 06:58:37 crc kubenswrapper[4729]: I0127 06:58:37.001572 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" event={"ID":"7a77adf9-d502-450e-b5b7-8421a46b658c","Type":"ContainerStarted","Data":"608969a536d04d18a4871ccc34e22f6e6b3911905560842deed62b1b4cc28f6b"} Jan 27 06:58:37 crc kubenswrapper[4729]: I0127 06:58:37.002524 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-w6c52" event={"ID":"71eeaf2d-639f-494a-8b0a-230e4c800a72","Type":"ContainerStarted","Data":"54eb609089d9efdb64ddcbaf9aae9a773e8975a03ef0bd79416991283cd0507e"} Jan 27 06:58:37 crc kubenswrapper[4729]: I0127 06:58:37.003399 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" event={"ID":"7fbae4f4-f168-4142-9f19-6644887d053f","Type":"ContainerStarted","Data":"2ea01cb40c47927aab230532a32e910174fc267792a3f358b8bee2821fe837fc"} Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.031985 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" event={"ID":"7a77adf9-d502-450e-b5b7-8421a46b658c","Type":"ContainerStarted","Data":"562685fab4e46045561a2c2451b9ed68da21a0d3c3b73a53ccb2a58b91650e46"} Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.034848 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-w6c52" event={"ID":"71eeaf2d-639f-494a-8b0a-230e4c800a72","Type":"ContainerStarted","Data":"0138b1fee761b19c1433f125784977cbaaada795d94840c06146a3397a176b91"} Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.037094 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" event={"ID":"7fbae4f4-f168-4142-9f19-6644887d053f","Type":"ContainerStarted","Data":"599b7493938742746b97c36c4c8de4da572ee577d97f4b880f92fd596fdd2fd0"} Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.037265 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.054286 4729 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsjzv" podStartSLOduration=2.242540901 podStartE2EDuration="6.054267273s" podCreationTimestamp="2026-01-27 06:58:35 +0000 UTC" firstStartedPulling="2026-01-27 06:58:36.497503826 +0000 UTC m=+681.564625089" lastFinishedPulling="2026-01-27 06:58:40.309230188 +0000 UTC m=+685.376351461" observedRunningTime="2026-01-27 06:58:41.050608736 +0000 UTC m=+686.117730079" watchObservedRunningTime="2026-01-27 06:58:41.054267273 +0000 UTC m=+686.121388536" Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.081462 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-w6c52" podStartSLOduration=2.301499675 podStartE2EDuration="6.081439596s" podCreationTimestamp="2026-01-27 06:58:35 +0000 UTC" firstStartedPulling="2026-01-27 06:58:36.546620244 +0000 UTC m=+681.613741507" lastFinishedPulling="2026-01-27 06:58:40.326560125 +0000 UTC m=+685.393681428" observedRunningTime="2026-01-27 06:58:41.078507942 +0000 UTC m=+686.145629225" watchObservedRunningTime="2026-01-27 06:58:41.081439596 +0000 UTC m=+686.148560859" Jan 27 06:58:41 crc kubenswrapper[4729]: I0127 06:58:41.126782 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" podStartSLOduration=2.367946309 podStartE2EDuration="6.126761902s" podCreationTimestamp="2026-01-27 06:58:35 +0000 UTC" firstStartedPulling="2026-01-27 06:58:36.559670723 +0000 UTC m=+681.626791986" lastFinishedPulling="2026-01-27 06:58:40.318486306 +0000 UTC m=+685.385607579" observedRunningTime="2026-01-27 06:58:41.123840858 +0000 UTC m=+686.190962131" watchObservedRunningTime="2026-01-27 06:58:41.126761902 +0000 UTC m=+686.193883165" Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.729359 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95wgz"] Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730479 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-controller" containerID="cri-o://2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730655 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="sbdb" containerID="cri-o://eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730726 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="nbdb" containerID="cri-o://d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730782 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="northd" containerID="cri-o://c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730832 4729 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730881 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-node" containerID="cri-o://2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.730939 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-acl-logging" containerID="cri-o://1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" gracePeriod=30 Jan 27 06:58:45 crc kubenswrapper[4729]: I0127 06:58:45.801522 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" containerID="cri-o://cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" gracePeriod=30 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.015727 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/3.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.017352 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovn-acl-logging/0.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.017844 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovn-controller/0.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.019169 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.071732 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/2.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.073387 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/1.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.073588 4729 generic.go:334] "Generic (PLEG): container finished" podID="15e81784-44b6-45c7-a893-4b38366a1b5e" containerID="869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e" exitCode=2 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.073647 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerDied","Data":"869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.073887 4729 scope.go:117] "RemoveContainer" containerID="b5f9b60cf226acea6d347a1512ed26a090d6de28dbfeececf0afc5ea16d58e39" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.074929 4729 scope.go:117] "RemoveContainer" containerID="869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.075784 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-45zq7_openshift-multus(15e81784-44b6-45c7-a893-4b38366a1b5e)\"" pod="openshift-multus/multus-45zq7" podUID="15e81784-44b6-45c7-a893-4b38366a1b5e" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076687 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95qnn"] Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.076903 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="nbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076916 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="nbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.076928 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076935 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.076946 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076953 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.076965 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kubecfg-setup" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076972 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" 
containerName="kubecfg-setup" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.076983 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.076991 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077004 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-acl-logging" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077011 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-acl-logging" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077021 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="sbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077028 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="sbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077038 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="northd" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077045 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="northd" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077056 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077124 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077140 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-node" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077148 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-node" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077161 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077168 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077277 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077291 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-acl-logging" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077301 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="northd" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077310 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077318 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="nbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077326 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-node" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077334 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077343 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovn-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077356 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077364 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="sbdb" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077471 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077481 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.077497 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077504 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077606 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.077621 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerName="ovnkube-controller" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.078027 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovnkube-controller/3.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.082019 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovn-acl-logging/0.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.082547 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-95wgz_f4dbf50d-949f-4203-873a-7ced1d5a5015/ovn-controller/0.log" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.082863 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083111 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083210 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083278 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083357 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083425 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083489 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" exitCode=0 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083553 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" exitCode=143 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083621 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4dbf50d-949f-4203-873a-7ced1d5a5015" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" exitCode=143 Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083776 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083856 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083938 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084129 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084249 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084325 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084396 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084475 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084558 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084636 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084713 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084784 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084844 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084911 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.083164 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.084979 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085145 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085163 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085170 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085177 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085184 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085191 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085198 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085205 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085212 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085219 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085233 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085247 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085258 4729 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085269 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085276 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085284 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085291 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085298 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085304 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085311 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085317 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085326 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95wgz" event={"ID":"f4dbf50d-949f-4203-873a-7ced1d5a5015","Type":"ContainerDied","Data":"67fa9c5098bd53dc85612cba4b388aa52b6b6572fc25b5fbeca9e7f8a6adc162"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085339 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085346 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085352 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085358 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085364 4729 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085370 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085376 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085385 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085391 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.085397 4729 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.110572 4729 scope.go:117] "RemoveContainer" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.131163 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.148302 4729 scope.go:117] "RemoveContainer" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.148936 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.148993 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149014 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149039 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149064 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149094 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149107 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149123 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149117 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log" (OuterVolumeSpecName: "node-log") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149138 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149184 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149206 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149235 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149256 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149274 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149337 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149351 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149374 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149424 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgfn5\" (UniqueName: 
\"kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149451 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet\") pod \"f4dbf50d-949f-4203-873a-7ced1d5a5015\" (UID: \"f4dbf50d-949f-4203-873a-7ced1d5a5015\") " Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149619 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-kubelet\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149652 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149679 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-bin\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149697 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-slash\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149717 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-etc-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149740 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149748 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149772 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149785 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-ovn\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149811 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket" (OuterVolumeSpecName: "log-socket") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149827 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash" (OuterVolumeSpecName: "host-slash") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149842 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-systemd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.149993 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gj5\" (UniqueName: \"kubernetes.io/projected/635e17f0-c9c9-42f7-a941-09801d998018-kube-api-access-h2gj5\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150034 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-env-overrides\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150080 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-node-log\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150113 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-log-socket\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150136 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-var-lib-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150152 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-netns\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150167 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-script-lib\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150184 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-config\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150205 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150215 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/635e17f0-c9c9-42f7-a941-09801d998018-ovn-node-metrics-cert\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150233 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150251 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-systemd-units\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150270 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-netd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150311 4729 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150321 4729 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150330 4729 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150340 4729 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150348 4729 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150356 4729 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150364 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150372 4729 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150484 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150515 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150538 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150821 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150841 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150861 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150879 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.150910 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.154355 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.155309 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5" (OuterVolumeSpecName: "kube-api-access-wgfn5") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "kube-api-access-wgfn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.163721 4729 scope.go:117] "RemoveContainer" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.165493 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f4dbf50d-949f-4203-873a-7ced1d5a5015" (UID: "f4dbf50d-949f-4203-873a-7ced1d5a5015"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.173347 4729 scope.go:117] "RemoveContainer" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.185090 4729 scope.go:117] "RemoveContainer" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.197926 4729 scope.go:117] "RemoveContainer" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.207807 4729 scope.go:117] "RemoveContainer" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.219160 4729 scope.go:117] "RemoveContainer" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.237536 4729 scope.go:117] "RemoveContainer" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.251888 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-var-lib-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.251924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-script-lib\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.251944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-netns\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.251961 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-config\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.251988 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252008 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/635e17f0-c9c9-42f7-a941-09801d998018-ovn-node-metrics-cert\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252031 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-systemd-units\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252031 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-var-lib-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-netd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252095 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-kubelet\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252119 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252138 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-bin\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252155 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-slash\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252169 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-etc-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252172 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-netns\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252187 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-ovn\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252240 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-systemd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252274 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gj5\" (UniqueName: \"kubernetes.io/projected/635e17f0-c9c9-42f7-a941-09801d998018-kube-api-access-h2gj5\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252293 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-env-overrides\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-node-log\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-log-socket\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252383 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252394 4729 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252404 4729 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252414 4729 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252423 4729 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252432 4729 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252441 4729 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252450 4729 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252459 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgfn5\" (UniqueName: \"kubernetes.io/projected/f4dbf50d-949f-4203-873a-7ced1d5a5015-kube-api-access-wgfn5\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252468 4729 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252478 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4dbf50d-949f-4203-873a-7ced1d5a5015-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252486 4729 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f4dbf50d-949f-4203-873a-7ced1d5a5015-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252518 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-log-socket\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252573 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-script-lib\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252610 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-slash\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252634 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-systemd-units\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252655 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-netd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252675 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-kubelet\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252693 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-run-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252713 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-cni-bin\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252132 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252743 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-systemd\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252761 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-etc-openvswitch\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252782 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.252805 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-run-ovn\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.253009 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-ovnkube-config\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.253107 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/635e17f0-c9c9-42f7-a941-09801d998018-node-log\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.253176 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/635e17f0-c9c9-42f7-a941-09801d998018-env-overrides\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.254242 4729 scope.go:117] "RemoveContainer" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.255539 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": container with ID starting with cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf not found: ID does not exist" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.255578 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} err="failed to get container status \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": rpc error: code = NotFound desc = could not find container \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": container with ID starting with cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.255605 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.255969 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": container with ID starting with 7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b not found: ID does not exist" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256001 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} err="failed to get container status \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": rpc error: code = 
NotFound desc = could not find container \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": container with ID starting with 7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256019 4729 scope.go:117] "RemoveContainer" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.256400 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": container with ID starting with eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4 not found: ID does not exist" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256429 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} err="failed to get container status \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": rpc error: code = NotFound desc = could not find container \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": container with ID starting with eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256457 4729 scope.go:117] "RemoveContainer" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/635e17f0-c9c9-42f7-a941-09801d998018-ovn-node-metrics-cert\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.256794 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": container with ID starting with d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20 not found: ID does not exist" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256820 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} err="failed to get container status \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": rpc error: code = NotFound desc = could not find container \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": container with ID starting with d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.256834 4729 scope.go:117] "RemoveContainer" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.257166 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": container with ID 
starting with c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb not found: ID does not exist" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.257199 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} err="failed to get container status \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": rpc error: code = NotFound desc = could not find container \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": container with ID starting with c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.257220 4729 scope.go:117] "RemoveContainer" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.257762 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": container with ID starting with b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b not found: ID does not exist" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.257787 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} err="failed to get container status \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": rpc error: code = NotFound desc = could not find container \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": container with ID starting with b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.257801 4729 scope.go:117] "RemoveContainer" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.258090 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": container with ID starting with 2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c not found: ID does not exist" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258122 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} err="failed to get container status \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": rpc error: code = NotFound desc = could not find container \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": container with ID starting with 2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258145 4729 scope.go:117] "RemoveContainer" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.258433 4729 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": container with ID starting with 1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716 not found: ID does not exist" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258459 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} err="failed to get container status \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": rpc error: code = NotFound desc = could not find container \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": container with ID starting with 1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258476 4729 scope.go:117] "RemoveContainer" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.258787 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": container with ID starting with 2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461 not found: ID does not exist" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258820 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} err="failed to get container status \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": rpc error: code = NotFound desc = could not find container \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": container with ID starting with 2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.258840 4729 scope.go:117] "RemoveContainer" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: E0127 06:58:46.262238 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": container with ID starting with 233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7 not found: ID does not exist" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.262270 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} err="failed to get container status \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": rpc error: code = NotFound desc = could not find container \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": container with ID starting with 233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.262288 4729 scope.go:117] "RemoveContainer" 
containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.262801 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} err="failed to get container status \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": rpc error: code = NotFound desc = could not find container \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": container with ID starting with cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.262829 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263168 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} err="failed to get container status \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": rpc error: code = NotFound desc = could not find container \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": container with ID starting with 7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263206 4729 scope.go:117] "RemoveContainer" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263496 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} err="failed to get container status \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": rpc error: code = NotFound desc = could not find container \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": container with ID starting with eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263522 4729 scope.go:117] "RemoveContainer" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263797 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} err="failed to get container status \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": rpc error: code = NotFound desc = could not find container \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": container with ID starting with d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.263823 4729 scope.go:117] "RemoveContainer" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264112 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} err="failed to get container status \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": rpc error: code = NotFound desc = could not find 
container \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": container with ID starting with c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264144 4729 scope.go:117] "RemoveContainer" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264395 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} err="failed to get container status \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": rpc error: code = NotFound desc = could not find container \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": container with ID starting with b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264417 4729 scope.go:117] "RemoveContainer" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264641 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} err="failed to get container status \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": rpc error: code = NotFound desc = could not find container \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": container with ID starting with 2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264666 4729 scope.go:117] "RemoveContainer" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264967 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} err="failed to get container status \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": rpc error: code = NotFound desc = could not find container \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": container with ID starting with 1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.264996 4729 scope.go:117] "RemoveContainer" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265273 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} err="failed to get container status \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": rpc error: code = NotFound desc = could not find container \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": container with ID starting with 2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265298 4729 scope.go:117] "RemoveContainer" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265523 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} err="failed to get container status \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": rpc error: code = NotFound desc = could not find container \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": container with ID starting with 233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265549 4729 scope.go:117] "RemoveContainer" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265782 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} err="failed to get container status \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": rpc error: code = NotFound desc = could not find container \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": container with ID starting with cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.265821 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266045 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} err="failed to get container status \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": rpc error: code = NotFound desc = could not find container \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": container with ID starting with 7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266101 4729 scope.go:117] "RemoveContainer" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266330 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} err="failed to get container status \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": rpc error: code = NotFound desc = could not find container \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": container with ID starting with eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266350 4729 scope.go:117] "RemoveContainer" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266549 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} err="failed to get container status \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": rpc error: code = NotFound desc = could not find container \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": container with ID starting with 
d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266567 4729 scope.go:117] "RemoveContainer" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266753 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} err="failed to get container status \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": rpc error: code = NotFound desc = could not find container \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": container with ID starting with c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266770 4729 scope.go:117] "RemoveContainer" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266952 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} err="failed to get container status \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": rpc error: code = NotFound desc = could not find container \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": container with ID starting with b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.266970 4729 scope.go:117] "RemoveContainer" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267160 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} err="failed to get container status \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": rpc error: code = NotFound desc = could not find container \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": container with ID starting with 2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267176 4729 scope.go:117] "RemoveContainer" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267350 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} err="failed to get container status \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": rpc error: code = NotFound desc = could not find container \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": container with ID starting with 1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267367 4729 scope.go:117] "RemoveContainer" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267546 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} err="failed to get container status \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": rpc error: code = NotFound desc = could not find container \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": container with ID starting with 2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267564 4729 scope.go:117] "RemoveContainer" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267737 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} err="failed to get container status \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": rpc error: code = NotFound desc = could not find container \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": container with ID starting with 233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267753 4729 scope.go:117] "RemoveContainer" containerID="cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267932 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf"} err="failed to get container status \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": rpc error: code = NotFound desc = could not find container \"cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf\": container with ID starting with cdc9f3d6c98962b8d266249293c70499c71a1dcdd94401801cb2edc9f7416dcf not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.267948 4729 scope.go:117] "RemoveContainer" containerID="7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268127 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b"} err="failed to get container status \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": rpc error: code = NotFound desc = could not find container \"7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b\": container with ID starting with 7d1cd36e2047ef782434b8282e0fb342c8e1d0b1bf8e95d10c3183e359ee627b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268143 4729 scope.go:117] "RemoveContainer" containerID="eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268343 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4"} err="failed to get container status \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": rpc error: code = NotFound desc = could not find container \"eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4\": container with ID starting with eff45bf03bbe530fc8002c63d78ead08fe27fa90b179ad63e757cb7a620844f4 not found: ID does not exist" Jan 
27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268359 4729 scope.go:117] "RemoveContainer" containerID="d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268539 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20"} err="failed to get container status \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": rpc error: code = NotFound desc = could not find container \"d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20\": container with ID starting with d48b8000f066bd798d51b3627be25d6bef979a030315c79aa021aef7122c8c20 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268555 4729 scope.go:117] "RemoveContainer" containerID="c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268735 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb"} err="failed to get container status \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": rpc error: code = NotFound desc = could not find container \"c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb\": container with ID starting with c194eca0df177039ef474fc78d5efee031b57443e2c9adca053a3e545a73fabb not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268751 4729 scope.go:117] "RemoveContainer" containerID="b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268932 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b"} err="failed to get container status \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": rpc error: code = NotFound desc = could not find container \"b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b\": container with ID starting with b398219af4705fb1fefe7eeab454a01e8e41e15a56e0c8571abdff0f18a6025b not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.268950 4729 scope.go:117] "RemoveContainer" containerID="2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269133 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c"} err="failed to get container status \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": rpc error: code = NotFound desc = could not find container \"2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c\": container with ID starting with 2a7a4c1e2cda7b2e9b11d87f7c43a2149e8f36048f51101854411a4a2c8ca81c not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269158 4729 scope.go:117] "RemoveContainer" containerID="1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269340 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716"} err="failed to get container status 
\"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": rpc error: code = NotFound desc = could not find container \"1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716\": container with ID starting with 1fa8b34922c3baa55dc80908564a6633fc556efc890f7543a114e31821bdc716 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269357 4729 scope.go:117] "RemoveContainer" containerID="2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269535 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461"} err="failed to get container status \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": rpc error: code = NotFound desc = could not find container \"2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461\": container with ID starting with 2a71cc53c10654c164f2b47cac31d90f19ac12c79da8f02cc11572bd28c51461 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269551 4729 scope.go:117] "RemoveContainer" containerID="233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.269731 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7"} err="failed to get container status \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": rpc error: code = NotFound desc = could not find container \"233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7\": container with ID starting with 233ad0f323232a95ee0c6faeb5241d2714c2b911569864d0c5ecae7bae9a1ab7 not found: ID does not exist" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.275822 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bp54v" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.298366 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gj5\" (UniqueName: \"kubernetes.io/projected/635e17f0-c9c9-42f7-a941-09801d998018-kube-api-access-h2gj5\") pod \"ovnkube-node-95qnn\" (UID: \"635e17f0-c9c9-42f7-a941-09801d998018\") " pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.409311 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.440920 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95wgz"] Jan 27 06:58:46 crc kubenswrapper[4729]: I0127 06:58:46.448438 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95wgz"] Jan 27 06:58:47 crc kubenswrapper[4729]: I0127 06:58:47.093606 4729 generic.go:334] "Generic (PLEG): container finished" podID="635e17f0-c9c9-42f7-a941-09801d998018" containerID="4717889b1dd2ca5b9b0bfc8c6b197d6287bbe8eb722e7a91160becae3b78f040" exitCode=0 Jan 27 06:58:47 crc kubenswrapper[4729]: I0127 06:58:47.093735 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerDied","Data":"4717889b1dd2ca5b9b0bfc8c6b197d6287bbe8eb722e7a91160becae3b78f040"} Jan 27 06:58:47 crc kubenswrapper[4729]: I0127 06:58:47.094155 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"6a4098230c2285d8d3c08493c1a57c8948bbf61aae292925168ddad828e87c78"} Jan 27 06:58:47 crc kubenswrapper[4729]: I0127 06:58:47.102589 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/2.log" Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.114761 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"b5166ee49cfbe3e4f82b2379a952e99bd80ae7d178a67c302d2b43cbd4f86d6a"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.115104 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"e01d99c058d1e5172ab50314e1d9f4e8eaab75d5524f784dca8767aed1490dc0"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.115127 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"3234be091ddb0f80d30a5fa38b89603ca24eb8515c853d3b2d3070a2b432fed0"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.115146 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"2f93f636b7511e84a346961fd6a3a959e2768b9ce46b5481592b5165fb578cb2"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.115164 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"ddbcbcb4692f17dcc43247f4b523527410c985d9f05eb49d8078f7f01d71eb0c"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.115181 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"edaa710a4c4b3f4c3239e68c7f4772beedc475b60ec5a09330e9b1e51e8d92a5"} Jan 27 06:58:48 crc kubenswrapper[4729]: I0127 06:58:48.369812 4729 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="f4dbf50d-949f-4203-873a-7ced1d5a5015" path="/var/lib/kubelet/pods/f4dbf50d-949f-4203-873a-7ced1d5a5015/volumes" Jan 27 06:58:51 crc kubenswrapper[4729]: I0127 06:58:51.143325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"c2d04d0c5e14066996d48a36d27d32e04393b5244d654d44e6a5afc54e66bb8a"} Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.164642 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" podUID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" containerName="registry" containerID="cri-o://6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f" gracePeriod=30 Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.431975 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.539885 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7zvh\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.539963 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540001 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540047 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540246 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540311 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540349 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: 
\"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.540388 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token\") pod \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\" (UID: \"1e2c5da4-8ac5-4e80-b351-feffc47032e6\") " Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.541646 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.541750 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.546429 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.547980 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.549594 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.549625 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh" (OuterVolumeSpecName: "kube-api-access-z7zvh") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "kube-api-access-z7zvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.553948 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.561897 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1e2c5da4-8ac5-4e80-b351-feffc47032e6" (UID: "1e2c5da4-8ac5-4e80-b351-feffc47032e6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642037 4729 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2c5da4-8ac5-4e80-b351-feffc47032e6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642116 4729 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642136 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642152 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7zvh\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-kube-api-access-z7zvh\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642169 4729 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c5da4-8ac5-4e80-b351-feffc47032e6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642185 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2c5da4-8ac5-4e80-b351-feffc47032e6-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:52 crc kubenswrapper[4729]: I0127 06:58:52.642200 4729 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2c5da4-8ac5-4e80-b351-feffc47032e6-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.158152 4729 generic.go:334] "Generic (PLEG): container finished" podID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" containerID="6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f" exitCode=0 Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.158185 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.158208 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" event={"ID":"1e2c5da4-8ac5-4e80-b351-feffc47032e6","Type":"ContainerDied","Data":"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f"} Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.158246 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxkmt" event={"ID":"1e2c5da4-8ac5-4e80-b351-feffc47032e6","Type":"ContainerDied","Data":"2c761a807db84fc14dd856cb41d074935470bc408581340a40589eccf6b7fa9d"} Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.158263 4729 scope.go:117] "RemoveContainer" containerID="6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.163817 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" event={"ID":"635e17f0-c9c9-42f7-a941-09801d998018","Type":"ContainerStarted","Data":"91e7f312fde88998027e45ca8c70ca81b0cbf11f2495f59de378d5b613ca046a"} Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.165336 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.165370 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.165382 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.178294 4729 scope.go:117] "RemoveContainer" containerID="6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f" Jan 27 06:58:53 crc kubenswrapper[4729]: E0127 06:58:53.178665 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f\": container with ID starting with 6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f not found: ID does not exist" containerID="6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.178696 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f"} err="failed to get container status \"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f\": rpc error: code = NotFound desc = could not find container \"6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f\": container with ID starting with 6191215d112920dac0d27b4637cc8344fc91f23f1e39837360f4a6280fd86e4f not found: ID does not exist" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.193574 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.194366 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.202050 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" podStartSLOduration=7.202033103 podStartE2EDuration="7.202033103s" podCreationTimestamp="2026-01-27 06:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:58:53.198034494 +0000 UTC m=+698.265155777" watchObservedRunningTime="2026-01-27 06:58:53.202033103 +0000 UTC m=+698.269154366" Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.271108 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:58:53 crc kubenswrapper[4729]: I0127 06:58:53.275917 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxkmt"] Jan 27 06:58:54 crc kubenswrapper[4729]: I0127 06:58:54.374801 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" path="/var/lib/kubelet/pods/1e2c5da4-8ac5-4e80-b351-feffc47032e6/volumes" Jan 27 06:58:58 crc kubenswrapper[4729]: I0127 06:58:58.362960 4729 scope.go:117] "RemoveContainer" containerID="869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e" Jan 27 06:58:58 crc kubenswrapper[4729]: E0127 06:58:58.363640 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-45zq7_openshift-multus(15e81784-44b6-45c7-a893-4b38366a1b5e)\"" pod="openshift-multus/multus-45zq7" podUID="15e81784-44b6-45c7-a893-4b38366a1b5e" Jan 27 06:59:01 crc kubenswrapper[4729]: I0127 06:59:01.088034 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:59:01 crc kubenswrapper[4729]: I0127 06:59:01.089260 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:59:10 crc kubenswrapper[4729]: I0127 06:59:10.362796 4729 scope.go:117] "RemoveContainer" containerID="869eed51b4107feb9931f17a3753b814dcb492a499998ae5f14b6ef9d78d056e" Jan 27 06:59:11 crc kubenswrapper[4729]: I0127 06:59:11.281573 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-45zq7_15e81784-44b6-45c7-a893-4b38366a1b5e/kube-multus/2.log" Jan 27 06:59:11 crc kubenswrapper[4729]: I0127 06:59:11.282001 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-45zq7" event={"ID":"15e81784-44b6-45c7-a893-4b38366a1b5e","Type":"ContainerStarted","Data":"f6b1d3ef6b917b99382c87b3f2b19842803de144b76586a53a768b98550fe305"} Jan 27 06:59:16 crc kubenswrapper[4729]: I0127 06:59:16.445692 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95qnn" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.749106 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs"] Jan 27 06:59:28 crc kubenswrapper[4729]: E0127 06:59:28.749683 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" containerName="registry" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.749694 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" containerName="registry" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.749782 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2c5da4-8ac5-4e80-b351-feffc47032e6" containerName="registry" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.750456 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.752865 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.760922 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs"] Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.884250 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtsx\" (UniqueName: \"kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.884311 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.884782 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.986049 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.986311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.986468 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjtsx\" (UniqueName: \"kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.986477 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:28 crc kubenswrapper[4729]: I0127 06:59:28.986744 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:29 crc kubenswrapper[4729]: I0127 06:59:29.006608 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjtsx\" (UniqueName: \"kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:29 crc kubenswrapper[4729]: I0127 06:59:29.101489 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:29 crc kubenswrapper[4729]: I0127 06:59:29.526337 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs"] Jan 27 06:59:29 crc kubenswrapper[4729]: W0127 06:59:29.535484 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11342f63_e747_4052_a4e3_f38d99033488.slice/crio-bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9 WatchSource:0}: Error finding container bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9: Status 404 returned error can't find the container with id bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9 Jan 27 06:59:30 crc kubenswrapper[4729]: I0127 06:59:30.425259 4729 generic.go:334] "Generic (PLEG): container finished" podID="11342f63-e747-4052-a4e3-f38d99033488" containerID="c040e51eae5a22704c988d9d9be0ce6ea038a9160054d15f8d6f2c64e2ab9fbf" exitCode=0 Jan 27 06:59:30 crc kubenswrapper[4729]: I0127 06:59:30.425329 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" event={"ID":"11342f63-e747-4052-a4e3-f38d99033488","Type":"ContainerDied","Data":"c040e51eae5a22704c988d9d9be0ce6ea038a9160054d15f8d6f2c64e2ab9fbf"} Jan 27 06:59:30 crc kubenswrapper[4729]: I0127 06:59:30.425389 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" event={"ID":"11342f63-e747-4052-a4e3-f38d99033488","Type":"ContainerStarted","Data":"bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9"} Jan 27 06:59:31 crc kubenswrapper[4729]: I0127 06:59:31.087338 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:59:31 crc kubenswrapper[4729]: I0127 06:59:31.087421 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:59:32 crc kubenswrapper[4729]: I0127 06:59:32.440940 4729 generic.go:334] "Generic (PLEG): container finished" podID="11342f63-e747-4052-a4e3-f38d99033488" containerID="728b0b909adb816f8b14395be065f873354fa5bd5d0ffdd75cafed4a69f8355f" exitCode=0 Jan 27 06:59:32 crc kubenswrapper[4729]: I0127 06:59:32.441006 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" event={"ID":"11342f63-e747-4052-a4e3-f38d99033488","Type":"ContainerDied","Data":"728b0b909adb816f8b14395be065f873354fa5bd5d0ffdd75cafed4a69f8355f"} Jan 27 06:59:33 crc kubenswrapper[4729]: I0127 06:59:33.454411 4729 generic.go:334] "Generic (PLEG): container finished" podID="11342f63-e747-4052-a4e3-f38d99033488" containerID="ff348c7c0115c88f88b5f6274cedce96485697fad26e5e7b20612cb1dc5fc4d2" exitCode=0 Jan 27 06:59:33 crc kubenswrapper[4729]: I0127 06:59:33.454465 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" event={"ID":"11342f63-e747-4052-a4e3-f38d99033488","Type":"ContainerDied","Data":"ff348c7c0115c88f88b5f6274cedce96485697fad26e5e7b20612cb1dc5fc4d2"} Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.703945 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.878013 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util\") pod \"11342f63-e747-4052-a4e3-f38d99033488\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.878257 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle\") pod \"11342f63-e747-4052-a4e3-f38d99033488\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.878489 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjtsx\" (UniqueName: \"kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx\") pod \"11342f63-e747-4052-a4e3-f38d99033488\" (UID: \"11342f63-e747-4052-a4e3-f38d99033488\") " Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.879835 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle" (OuterVolumeSpecName: "bundle") pod "11342f63-e747-4052-a4e3-f38d99033488" (UID: "11342f63-e747-4052-a4e3-f38d99033488"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.888066 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx" (OuterVolumeSpecName: "kube-api-access-mjtsx") pod "11342f63-e747-4052-a4e3-f38d99033488" (UID: "11342f63-e747-4052-a4e3-f38d99033488"). InnerVolumeSpecName "kube-api-access-mjtsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.980183 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:34 crc kubenswrapper[4729]: I0127 06:59:34.980221 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjtsx\" (UniqueName: \"kubernetes.io/projected/11342f63-e747-4052-a4e3-f38d99033488-kube-api-access-mjtsx\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:35 crc kubenswrapper[4729]: I0127 06:59:35.162587 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util" (OuterVolumeSpecName: "util") pod "11342f63-e747-4052-a4e3-f38d99033488" (UID: "11342f63-e747-4052-a4e3-f38d99033488"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:35 crc kubenswrapper[4729]: I0127 06:59:35.182542 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11342f63-e747-4052-a4e3-f38d99033488-util\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:35 crc kubenswrapper[4729]: I0127 06:59:35.469507 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" event={"ID":"11342f63-e747-4052-a4e3-f38d99033488","Type":"ContainerDied","Data":"bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9"} Jan 27 06:59:35 crc kubenswrapper[4729]: I0127 06:59:35.469587 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf2e89e82734e278655aeed34c3d4bffab4a7885b59b6716084233f146073ff9" Jan 27 06:59:35 crc kubenswrapper[4729]: I0127 06:59:35.469708 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs" Jan 27 06:59:35 crc kubenswrapper[4729]: E0127 06:59:35.603532 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11342f63_e747_4052_a4e3_f38d99033488.slice\": RecentStats: unable to find data in memory cache]" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.291623 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-94b9n"] Jan 27 06:59:37 crc kubenswrapper[4729]: E0127 06:59:37.292024 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="extract" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.292035 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="extract" Jan 27 06:59:37 crc kubenswrapper[4729]: E0127 06:59:37.292046 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="util" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.292051 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="util" Jan 27 06:59:37 crc kubenswrapper[4729]: E0127 06:59:37.292061 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="pull" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.292080 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="pull" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.292167 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="11342f63-e747-4052-a4e3-f38d99033488" containerName="extract" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.292499 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.294316 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.294359 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.301489 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-94b9n"] Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.303761 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cvqm9" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.413908 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h26g\" (UniqueName: \"kubernetes.io/projected/c1f989f4-8cae-498d-99ec-fcb530e3933a-kube-api-access-2h26g\") pod \"nmstate-operator-646758c888-94b9n\" (UID: \"c1f989f4-8cae-498d-99ec-fcb530e3933a\") " pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.515566 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h26g\" (UniqueName: \"kubernetes.io/projected/c1f989f4-8cae-498d-99ec-fcb530e3933a-kube-api-access-2h26g\") pod \"nmstate-operator-646758c888-94b9n\" (UID: \"c1f989f4-8cae-498d-99ec-fcb530e3933a\") " pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.551609 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h26g\" (UniqueName: \"kubernetes.io/projected/c1f989f4-8cae-498d-99ec-fcb530e3933a-kube-api-access-2h26g\") pod \"nmstate-operator-646758c888-94b9n\" (UID: \"c1f989f4-8cae-498d-99ec-fcb530e3933a\") " pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.606156 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" Jan 27 06:59:37 crc kubenswrapper[4729]: I0127 06:59:37.807836 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-94b9n"] Jan 27 06:59:38 crc kubenswrapper[4729]: I0127 06:59:38.487639 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" event={"ID":"c1f989f4-8cae-498d-99ec-fcb530e3933a","Type":"ContainerStarted","Data":"d47150ffa9a596d806221265b1e0dfda97632853d3d76236197f60fbaf29d13f"} Jan 27 06:59:41 crc kubenswrapper[4729]: I0127 06:59:41.514819 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" event={"ID":"c1f989f4-8cae-498d-99ec-fcb530e3933a","Type":"ContainerStarted","Data":"8d8d8b7abb14b4d02d1fc5c9a23ba219faa1f48bc396208faa1a90c607da7044"} Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.566422 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-94b9n" podStartSLOduration=2.683250404 podStartE2EDuration="5.56639334s" podCreationTimestamp="2026-01-27 06:59:37 +0000 UTC" firstStartedPulling="2026-01-27 06:59:37.823369323 +0000 UTC m=+742.890490596" lastFinishedPulling="2026-01-27 06:59:40.706512269 +0000 UTC m=+745.773633532" observedRunningTime="2026-01-27 06:59:41.545863573 +0000 UTC m=+746.612984876" watchObservedRunningTime="2026-01-27 06:59:42.56639334 +0000 UTC m=+747.633514633" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.572175 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mpwfg"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.573617 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.577525 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lfj6s" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.591152 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mpwfg"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.592198 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/289f2f6a-405b-4f18-a09b-80f4ad0b4f32-kube-api-access-g5wxl\") pod \"nmstate-metrics-54757c584b-mpwfg\" (UID: \"289f2f6a-405b-4f18-a09b-80f4ad0b4f32\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.619225 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.619845 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.625431 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.634377 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.656986 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qgfkk"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.657852 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692756 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbm2\" (UniqueName: \"kubernetes.io/projected/bf8073cf-8829-412d-a338-f198333488f2-kube-api-access-rbbm2\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692801 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692823 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6544f\" (UniqueName: \"kubernetes.io/projected/329d538c-94a1-4eec-bf1f-0a867d6f8db1-kube-api-access-6544f\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692840 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-dbus-socket\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692870 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-nmstate-lock\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692905 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/289f2f6a-405b-4f18-a09b-80f4ad0b4f32-kube-api-access-g5wxl\") pod \"nmstate-metrics-54757c584b-mpwfg\" (UID: \"289f2f6a-405b-4f18-a09b-80f4ad0b4f32\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.692931 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-ovs-socket\") pod 
\"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.731637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/289f2f6a-405b-4f18-a09b-80f4ad0b4f32-kube-api-access-g5wxl\") pod \"nmstate-metrics-54757c584b-mpwfg\" (UID: \"289f2f6a-405b-4f18-a09b-80f4ad0b4f32\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.777120 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.777811 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.781435 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.781775 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.782058 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tnmkd" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.783704 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.795845 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-ovs-socket\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.795898 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbm2\" (UniqueName: \"kubernetes.io/projected/bf8073cf-8829-412d-a338-f198333488f2-kube-api-access-rbbm2\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.795922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.795994 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-ovs-socket\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: E0127 06:59:42.796014 4729 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 06:59:42 crc kubenswrapper[4729]: E0127 06:59:42.796062 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair podName:bf8073cf-8829-412d-a338-f198333488f2 nodeName:}" failed. No retries permitted until 2026-01-27 06:59:43.296043714 +0000 UTC m=+748.363164977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-k6zqj" (UID: "bf8073cf-8829-412d-a338-f198333488f2") : secret "openshift-nmstate-webhook" not found Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.796175 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-dbus-socket\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.796209 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6544f\" (UniqueName: \"kubernetes.io/projected/329d538c-94a1-4eec-bf1f-0a867d6f8db1-kube-api-access-6544f\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.796279 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-nmstate-lock\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.796383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-nmstate-lock\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.796522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/329d538c-94a1-4eec-bf1f-0a867d6f8db1-dbus-socket\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.826632 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbm2\" (UniqueName: \"kubernetes.io/projected/bf8073cf-8829-412d-a338-f198333488f2-kube-api-access-rbbm2\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.827212 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6544f\" (UniqueName: \"kubernetes.io/projected/329d538c-94a1-4eec-bf1f-0a867d6f8db1-kube-api-access-6544f\") pod \"nmstate-handler-qgfkk\" (UID: \"329d538c-94a1-4eec-bf1f-0a867d6f8db1\") " pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.888200 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.897702 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2954439-230c-448d-bb20-d1a458a99432-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.897836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2954439-230c-448d-bb20-d1a458a99432-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.897951 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5rc\" (UniqueName: \"kubernetes.io/projected/e2954439-230c-448d-bb20-d1a458a99432-kube-api-access-gf5rc\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.965663 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-799976d686-bkfl2"] Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.966291 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.983329 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.999833 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-trusted-ca-bundle\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.999897 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mscx\" (UniqueName: \"kubernetes.io/projected/068c4b95-38bf-4fd2-94d7-e81ab3af2738-kube-api-access-5mscx\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:42 crc kubenswrapper[4729]: I0127 06:59:42.999929 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2954439-230c-448d-bb20-d1a458a99432-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:42.999993 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-oauth-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000150 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000186 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5rc\" (UniqueName: \"kubernetes.io/projected/e2954439-230c-448d-bb20-d1a458a99432-kube-api-access-gf5rc\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000228 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000265 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2954439-230c-448d-bb20-d1a458a99432-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000382 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-service-ca\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.000446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-oauth-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.001523 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799976d686-bkfl2"] Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.017269 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e2954439-230c-448d-bb20-d1a458a99432-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.022221 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2954439-230c-448d-bb20-d1a458a99432-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.028084 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5rc\" (UniqueName: \"kubernetes.io/projected/e2954439-230c-448d-bb20-d1a458a99432-kube-api-access-gf5rc\") pod \"nmstate-console-plugin-7754f76f8b-wd76v\" (UID: \"e2954439-230c-448d-bb20-d1a458a99432\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.104785 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.104964 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105019 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-service-ca\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-oauth-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-trusted-ca-bundle\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105098 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mscx\" (UniqueName: \"kubernetes.io/projected/068c4b95-38bf-4fd2-94d7-e81ab3af2738-kube-api-access-5mscx\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105138 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-oauth-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.105174 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.106032 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.107056 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-service-ca\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " 
pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.109210 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-oauth-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.112463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-serving-cert\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.113016 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068c4b95-38bf-4fd2-94d7-e81ab3af2738-trusted-ca-bundle\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.122690 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/068c4b95-38bf-4fd2-94d7-e81ab3af2738-console-oauth-config\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.126680 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mscx\" (UniqueName: \"kubernetes.io/projected/068c4b95-38bf-4fd2-94d7-e81ab3af2738-kube-api-access-5mscx\") pod \"console-799976d686-bkfl2\" (UID: \"068c4b95-38bf-4fd2-94d7-e81ab3af2738\") " pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.177056 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mpwfg"] Jan 27 06:59:43 crc kubenswrapper[4729]: W0127 06:59:43.186303 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod289f2f6a_405b_4f18_a09b_80f4ad0b4f32.slice/crio-b0c2a323e1f2c347369aae2afbcd685fdf559588468bcbb361e1ac36f0dda410 WatchSource:0}: Error finding container b0c2a323e1f2c347369aae2afbcd685fdf559588468bcbb361e1ac36f0dda410: Status 404 returned error can't find the container with id b0c2a323e1f2c347369aae2afbcd685fdf559588468bcbb361e1ac36f0dda410 Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.293019 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.310841 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.315460 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v"] Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.316710 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bf8073cf-8829-412d-a338-f198333488f2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k6zqj\" (UID: \"bf8073cf-8829-412d-a338-f198333488f2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:43 crc kubenswrapper[4729]: W0127 06:59:43.323861 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2954439_230c_448d_bb20_d1a458a99432.slice/crio-3cde18fe26524c06b9f8f9b06b121727557f2b7a9008c8e878cad3d8fe524c9d WatchSource:0}: Error finding container 3cde18fe26524c06b9f8f9b06b121727557f2b7a9008c8e878cad3d8fe524c9d: Status 404 returned error can't find the container with id 3cde18fe26524c06b9f8f9b06b121727557f2b7a9008c8e878cad3d8fe524c9d Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.506513 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799976d686-bkfl2"] Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.536667 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.538753 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" event={"ID":"289f2f6a-405b-4f18-a09b-80f4ad0b4f32","Type":"ContainerStarted","Data":"b0c2a323e1f2c347369aae2afbcd685fdf559588468bcbb361e1ac36f0dda410"} Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.540346 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" event={"ID":"e2954439-230c-448d-bb20-d1a458a99432","Type":"ContainerStarted","Data":"3cde18fe26524c06b9f8f9b06b121727557f2b7a9008c8e878cad3d8fe524c9d"} Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.541585 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qgfkk" event={"ID":"329d538c-94a1-4eec-bf1f-0a867d6f8db1","Type":"ContainerStarted","Data":"a0af42264219f2f5ea80a54a517d55418355c20918f1a51436c373ecf274fa72"} Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.542552 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799976d686-bkfl2" event={"ID":"068c4b95-38bf-4fd2-94d7-e81ab3af2738","Type":"ContainerStarted","Data":"f5dda46d183f188b942920c41163647b3feabe01b0c051561280aea3d34c1b4e"} Jan 27 06:59:43 crc kubenswrapper[4729]: I0127 06:59:43.962804 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj"] Jan 27 06:59:43 crc kubenswrapper[4729]: W0127 06:59:43.971951 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8073cf_8829_412d_a338_f198333488f2.slice/crio-3857a913694a1b039a826deb275a5bc55a07fd896f0e02a67aad28a3b3ab7282 WatchSource:0}: Error finding container 3857a913694a1b039a826deb275a5bc55a07fd896f0e02a67aad28a3b3ab7282: Status 404 returned error can't find the container with id 3857a913694a1b039a826deb275a5bc55a07fd896f0e02a67aad28a3b3ab7282 Jan 27 06:59:44 crc kubenswrapper[4729]: I0127 06:59:44.550599 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799976d686-bkfl2" event={"ID":"068c4b95-38bf-4fd2-94d7-e81ab3af2738","Type":"ContainerStarted","Data":"af81ebcdf662799384ef84d92dc3b9b55d090f582ed0e9812110c6ae84fec5f0"} Jan 27 06:59:44 crc kubenswrapper[4729]: I0127 06:59:44.551759 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" event={"ID":"bf8073cf-8829-412d-a338-f198333488f2","Type":"ContainerStarted","Data":"3857a913694a1b039a826deb275a5bc55a07fd896f0e02a67aad28a3b3ab7282"} Jan 27 06:59:44 crc kubenswrapper[4729]: I0127 06:59:44.571307 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-799976d686-bkfl2" podStartSLOduration=2.571282288 podStartE2EDuration="2.571282288s" podCreationTimestamp="2026-01-27 06:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:59:44.564470896 +0000 UTC m=+749.631592189" watchObservedRunningTime="2026-01-27 06:59:44.571282288 +0000 UTC m=+749.638403551" Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.564907 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" 
event={"ID":"e2954439-230c-448d-bb20-d1a458a99432","Type":"ContainerStarted","Data":"222a3c0f3be7a84c8343593da7d50fa23ea160954504a9069efbfc9d14dfa668"} Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.567447 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qgfkk" event={"ID":"329d538c-94a1-4eec-bf1f-0a867d6f8db1","Type":"ContainerStarted","Data":"eb579378a2fc7f79df1a043d4306bd20987ccc632c944dba9dfab9eeaf0f4173"} Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.567842 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.569595 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" event={"ID":"bf8073cf-8829-412d-a338-f198333488f2","Type":"ContainerStarted","Data":"866e8e15d85ee99eb6460da999f1bdce64958740eda25384ab25aeb9d09ca284"} Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.570011 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.571254 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" event={"ID":"289f2f6a-405b-4f18-a09b-80f4ad0b4f32","Type":"ContainerStarted","Data":"37541f4c497cb3bb8329e0e8e4ef010985357ab1614483558dfcde575b802c71"} Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.580288 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wd76v" podStartSLOduration=1.9881031770000002 podStartE2EDuration="4.580278198s" podCreationTimestamp="2026-01-27 06:59:42 +0000 UTC" firstStartedPulling="2026-01-27 06:59:43.325897483 +0000 UTC m=+748.393018746" lastFinishedPulling="2026-01-27 06:59:45.918072504 +0000 UTC m=+750.985193767" observedRunningTime="2026-01-27 06:59:46.579423801 +0000 UTC m=+751.646545084" watchObservedRunningTime="2026-01-27 06:59:46.580278198 +0000 UTC m=+751.647399461" Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.596330 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" podStartSLOduration=2.637389283 podStartE2EDuration="4.596318578s" podCreationTimestamp="2026-01-27 06:59:42 +0000 UTC" firstStartedPulling="2026-01-27 06:59:43.975313683 +0000 UTC m=+749.042434976" lastFinishedPulling="2026-01-27 06:59:45.934242968 +0000 UTC m=+751.001364271" observedRunningTime="2026-01-27 06:59:46.596114383 +0000 UTC m=+751.663235656" watchObservedRunningTime="2026-01-27 06:59:46.596318578 +0000 UTC m=+751.663439841" Jan 27 06:59:46 crc kubenswrapper[4729]: I0127 06:59:46.614045 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qgfkk" podStartSLOduration=1.708756121 podStartE2EDuration="4.614025114s" podCreationTimestamp="2026-01-27 06:59:42 +0000 UTC" firstStartedPulling="2026-01-27 06:59:43.034378691 +0000 UTC m=+748.101499944" lastFinishedPulling="2026-01-27 06:59:45.939647674 +0000 UTC m=+751.006768937" observedRunningTime="2026-01-27 06:59:46.612718582 +0000 UTC m=+751.679839855" watchObservedRunningTime="2026-01-27 06:59:46.614025114 +0000 UTC m=+751.681146387" Jan 27 06:59:48 crc kubenswrapper[4729]: I0127 06:59:48.585581 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" event={"ID":"289f2f6a-405b-4f18-a09b-80f4ad0b4f32","Type":"ContainerStarted","Data":"3e50dd7a6c3797fe060a0d3b0317588e3aed632c8688dd2925752a617324b6e1"} Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.011444 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qgfkk" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.035173 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mpwfg" podStartSLOduration=5.995125636 podStartE2EDuration="11.035150921s" podCreationTimestamp="2026-01-27 06:59:42 +0000 UTC" firstStartedPulling="2026-01-27 06:59:43.188636418 +0000 UTC m=+748.255757681" lastFinishedPulling="2026-01-27 06:59:48.228661663 +0000 UTC m=+753.295782966" observedRunningTime="2026-01-27 06:59:48.615406657 +0000 UTC m=+753.682527930" watchObservedRunningTime="2026-01-27 06:59:53.035150921 +0000 UTC m=+758.102272194" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.293659 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.294018 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.305159 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.628303 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-799976d686-bkfl2" Jan 27 06:59:53 crc kubenswrapper[4729]: I0127 06:59:53.699549 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.175501 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc"] Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.180824 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.185592 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.185708 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc"] Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.187636 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.244784 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.245240 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qht\" (UniqueName: \"kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.245395 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.346597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.346672 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.346724 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76qht\" (UniqueName: \"kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.348397 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume\") pod 
\"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.362481 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.367567 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76qht\" (UniqueName: \"kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht\") pod \"collect-profiles-29491620-wprkc\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.505437 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:00 crc kubenswrapper[4729]: I0127 07:00:00.775925 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc"] Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.088006 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.088086 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.088142 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.088734 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.088805 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b" gracePeriod=600 Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.687164 4729 generic.go:334] "Generic (PLEG): container finished" podID="4fed4a56-1387-4eba-82f7-e84e528d735a" containerID="0491ea24bc6bfb82ffa9a7191dca0fb157ca6211d09cca6be26ca29a62ab77e7" exitCode=0 Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.687220 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" event={"ID":"4fed4a56-1387-4eba-82f7-e84e528d735a","Type":"ContainerDied","Data":"0491ea24bc6bfb82ffa9a7191dca0fb157ca6211d09cca6be26ca29a62ab77e7"} Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.687490 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" event={"ID":"4fed4a56-1387-4eba-82f7-e84e528d735a","Type":"ContainerStarted","Data":"eed713d6b7495c9af7d751a79f296d39abbe2dd3aa07010bae16c39c10573f38"} Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.690021 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b" exitCode=0 Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.690096 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b"} Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.690193 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40"} Jan 27 07:00:01 crc kubenswrapper[4729]: I0127 07:00:01.690235 4729 scope.go:117] "RemoveContainer" containerID="5bf0ca49a300f807dd9bcd1c99b9059912b76c5c995d8c8ce71426666ac706c5" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.003941 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.193478 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume\") pod \"4fed4a56-1387-4eba-82f7-e84e528d735a\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.193658 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume\") pod \"4fed4a56-1387-4eba-82f7-e84e528d735a\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.193706 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76qht\" (UniqueName: \"kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht\") pod \"4fed4a56-1387-4eba-82f7-e84e528d735a\" (UID: \"4fed4a56-1387-4eba-82f7-e84e528d735a\") " Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.195632 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume" (OuterVolumeSpecName: "config-volume") pod "4fed4a56-1387-4eba-82f7-e84e528d735a" (UID: "4fed4a56-1387-4eba-82f7-e84e528d735a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.203266 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4fed4a56-1387-4eba-82f7-e84e528d735a" (UID: "4fed4a56-1387-4eba-82f7-e84e528d735a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.206249 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht" (OuterVolumeSpecName: "kube-api-access-76qht") pod "4fed4a56-1387-4eba-82f7-e84e528d735a" (UID: "4fed4a56-1387-4eba-82f7-e84e528d735a"). InnerVolumeSpecName "kube-api-access-76qht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.295889 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76qht\" (UniqueName: \"kubernetes.io/projected/4fed4a56-1387-4eba-82f7-e84e528d735a-kube-api-access-76qht\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.295938 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fed4a56-1387-4eba-82f7-e84e528d735a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.295958 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fed4a56-1387-4eba-82f7-e84e528d735a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.543782 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k6zqj" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.709023 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" event={"ID":"4fed4a56-1387-4eba-82f7-e84e528d735a","Type":"ContainerDied","Data":"eed713d6b7495c9af7d751a79f296d39abbe2dd3aa07010bae16c39c10573f38"} Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.709079 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed713d6b7495c9af7d751a79f296d39abbe2dd3aa07010bae16c39c10573f38" Jan 27 07:00:03 crc kubenswrapper[4729]: I0127 07:00:03.709469 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-wprkc" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.155494 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n59rz"] Jan 27 07:00:10 crc kubenswrapper[4729]: E0127 07:00:10.156377 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fed4a56-1387-4eba-82f7-e84e528d735a" containerName="collect-profiles" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.156393 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fed4a56-1387-4eba-82f7-e84e528d735a" containerName="collect-profiles" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.156516 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fed4a56-1387-4eba-82f7-e84e528d735a" containerName="collect-profiles" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.157564 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.168525 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n59rz"] Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.186871 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rfd9\" (UniqueName: \"kubernetes.io/projected/b30c6af0-ba20-4257-a0cd-561fca708a60-kube-api-access-6rfd9\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.186953 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-utilities\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.187008 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-catalog-content\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.287651 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-catalog-content\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.287727 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rfd9\" (UniqueName: \"kubernetes.io/projected/b30c6af0-ba20-4257-a0cd-561fca708a60-kube-api-access-6rfd9\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.287748 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-utilities\") pod 
\"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.288220 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-utilities\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.288489 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b30c6af0-ba20-4257-a0cd-561fca708a60-catalog-content\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.313460 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rfd9\" (UniqueName: \"kubernetes.io/projected/b30c6af0-ba20-4257-a0cd-561fca708a60-kube-api-access-6rfd9\") pod \"certified-operators-n59rz\" (UID: \"b30c6af0-ba20-4257-a0cd-561fca708a60\") " pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.487354 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:10 crc kubenswrapper[4729]: I0127 07:00:10.783278 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n59rz"] Jan 27 07:00:11 crc kubenswrapper[4729]: I0127 07:00:11.765556 4729 generic.go:334] "Generic (PLEG): container finished" podID="b30c6af0-ba20-4257-a0cd-561fca708a60" containerID="9d43ad06d6ea70b5d9eb3f8eb298c8379b720ee2d6d7d1cc2f7d644921f4856e" exitCode=0 Jan 27 07:00:11 crc kubenswrapper[4729]: I0127 07:00:11.765691 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n59rz" event={"ID":"b30c6af0-ba20-4257-a0cd-561fca708a60","Type":"ContainerDied","Data":"9d43ad06d6ea70b5d9eb3f8eb298c8379b720ee2d6d7d1cc2f7d644921f4856e"} Jan 27 07:00:11 crc kubenswrapper[4729]: I0127 07:00:11.766135 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n59rz" event={"ID":"b30c6af0-ba20-4257-a0cd-561fca708a60","Type":"ContainerStarted","Data":"38c71863f5feb1e70ab1115fd899f3c6f7c7ea1928c208b533692af8fb9fad12"} Jan 27 07:00:12 crc kubenswrapper[4729]: I0127 07:00:12.088062 4729 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.729802 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj"] Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.731506 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.734906 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.744394 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj"] Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.806592 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n59rz" event={"ID":"b30c6af0-ba20-4257-a0cd-561fca708a60","Type":"ContainerStarted","Data":"bc8b9e69d7c5e4f3f7f6a12911a735babe9c1d8ade44e9ca4b6ad4ad3886d25e"} Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.810106 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.810166 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m766p\" (UniqueName: \"kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.810186 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.911243 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m766p\" (UniqueName: \"kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.911317 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.911455 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: 
\"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.912095 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.912175 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:16 crc kubenswrapper[4729]: I0127 07:00:16.928621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m766p\" (UniqueName: \"kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.050050 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.360303 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj"] Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.815232 4729 generic.go:334] "Generic (PLEG): container finished" podID="b30c6af0-ba20-4257-a0cd-561fca708a60" containerID="bc8b9e69d7c5e4f3f7f6a12911a735babe9c1d8ade44e9ca4b6ad4ad3886d25e" exitCode=0 Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.815300 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n59rz" event={"ID":"b30c6af0-ba20-4257-a0cd-561fca708a60","Type":"ContainerDied","Data":"bc8b9e69d7c5e4f3f7f6a12911a735babe9c1d8ade44e9ca4b6ad4ad3886d25e"} Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.817780 4729 generic.go:334] "Generic (PLEG): container finished" podID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerID="f64e3d0712375e6412d7ad6e0988dfd9c27e56f315d1ba433209d65b6a74f8c6" exitCode=0 Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.817815 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" event={"ID":"6246b5cc-e7d5-4791-b188-7a4bb601ac73","Type":"ContainerDied","Data":"f64e3d0712375e6412d7ad6e0988dfd9c27e56f315d1ba433209d65b6a74f8c6"} Jan 27 07:00:17 crc kubenswrapper[4729]: I0127 07:00:17.817845 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" event={"ID":"6246b5cc-e7d5-4791-b188-7a4bb601ac73","Type":"ContainerStarted","Data":"bfca3c8f5e6a9b70a6f1d1812e49506dccc7ee5b0e0e2250e111f584f35a3612"} Jan 27 07:00:18 crc kubenswrapper[4729]: I0127 
07:00:18.739361 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gw87z" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" containerID="cri-o://b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69" gracePeriod=15 Jan 27 07:00:18 crc kubenswrapper[4729]: I0127 07:00:18.827217 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n59rz" event={"ID":"b30c6af0-ba20-4257-a0cd-561fca708a60","Type":"ContainerStarted","Data":"50cec875c9725fdb45ac7dd6e6a5cc07d86ee5d9b6f4a5b58f2d082168ad99d1"} Jan 27 07:00:18 crc kubenswrapper[4729]: I0127 07:00:18.847436 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n59rz" podStartSLOduration=2.419901189 podStartE2EDuration="8.847420342s" podCreationTimestamp="2026-01-27 07:00:10 +0000 UTC" firstStartedPulling="2026-01-27 07:00:11.771575515 +0000 UTC m=+776.838696778" lastFinishedPulling="2026-01-27 07:00:18.199094668 +0000 UTC m=+783.266215931" observedRunningTime="2026-01-27 07:00:18.843210286 +0000 UTC m=+783.910331599" watchObservedRunningTime="2026-01-27 07:00:18.847420342 +0000 UTC m=+783.914541605" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.096170 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gw87z_43a6c23c-78b7-4a13-b1a4-efab2dc70130/console/0.log" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.096461 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142236 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142365 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142392 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142432 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 
07:00:19.142473 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.142493 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config\") pod \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\" (UID: \"43a6c23c-78b7-4a13-b1a4-efab2dc70130\") " Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.143148 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.143220 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config" (OuterVolumeSpecName: "console-config") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.143520 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca" (OuterVolumeSpecName: "service-ca") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.143546 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.147372 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97" (OuterVolumeSpecName: "kube-api-access-nrh97") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "kube-api-access-nrh97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.147585 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.150972 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43a6c23c-78b7-4a13-b1a4-efab2dc70130" (UID: "43a6c23c-78b7-4a13-b1a4-efab2dc70130"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.244565 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.244867 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.245008 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.245234 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.245356 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.245470 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a6c23c-78b7-4a13-b1a4-efab2dc70130-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.245583 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/43a6c23c-78b7-4a13-b1a4-efab2dc70130-kube-api-access-nrh97\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.836819 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gw87z_43a6c23c-78b7-4a13-b1a4-efab2dc70130/console/0.log" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.836886 4729 generic.go:334] "Generic (PLEG): container finished" podID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerID="b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69" exitCode=2 Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.836955 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gw87z" event={"ID":"43a6c23c-78b7-4a13-b1a4-efab2dc70130","Type":"ContainerDied","Data":"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69"} Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.836985 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gw87z" event={"ID":"43a6c23c-78b7-4a13-b1a4-efab2dc70130","Type":"ContainerDied","Data":"c64666a8e048bcb51dd05f577fbd5021891c15939075692049729d84b82767f8"} Jan 27 07:00:19 crc 
kubenswrapper[4729]: I0127 07:00:19.836978 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gw87z" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.837021 4729 scope.go:117] "RemoveContainer" containerID="b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.840528 4729 generic.go:334] "Generic (PLEG): container finished" podID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerID="e73c1d3ec827bf867d7a63fd45dafbcaa587aba7b66437dadfc8b43cf57a289f" exitCode=0 Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.840608 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" event={"ID":"6246b5cc-e7d5-4791-b188-7a4bb601ac73","Type":"ContainerDied","Data":"e73c1d3ec827bf867d7a63fd45dafbcaa587aba7b66437dadfc8b43cf57a289f"} Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.867582 4729 scope.go:117] "RemoveContainer" containerID="b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69" Jan 27 07:00:19 crc kubenswrapper[4729]: E0127 07:00:19.868101 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69\": container with ID starting with b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69 not found: ID does not exist" containerID="b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.868164 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69"} err="failed to get container status \"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69\": rpc error: code = NotFound desc = could not find container \"b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69\": container with ID starting with b284842fbde464d2697930dd1c2621929ccffc39b717783becc4862436938a69 not found: ID does not exist" Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.885950 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 07:00:19 crc kubenswrapper[4729]: I0127 07:00:19.888278 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gw87z"] Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.102946 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:20 crc kubenswrapper[4729]: E0127 07:00:20.103556 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.103577 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.103707 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" containerName="console" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.104814 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.113405 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.153953 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.154019 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzf7\" (UniqueName: \"kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.154139 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.254818 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.254884 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzf7\" (UniqueName: \"kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.254945 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.255376 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.255459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.273322 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bhzf7\" (UniqueName: \"kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7\") pod \"redhat-operators-7z4kb\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.369853 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a6c23c-78b7-4a13-b1a4-efab2dc70130" path="/var/lib/kubelet/pods/43a6c23c-78b7-4a13-b1a4-efab2dc70130/volumes" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.453963 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.487568 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.487624 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.529348 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.709265 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.849506 4729 generic.go:334] "Generic (PLEG): container finished" podID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerID="2df9a3344a7825148aa64c81f8b8b9bf1ccbb0fec14cc57f510e389f85361cdc" exitCode=0 Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.849552 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" event={"ID":"6246b5cc-e7d5-4791-b188-7a4bb601ac73","Type":"ContainerDied","Data":"2df9a3344a7825148aa64c81f8b8b9bf1ccbb0fec14cc57f510e389f85361cdc"} Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.851490 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerStarted","Data":"371152fb65af385e25e18da290c951ddc3ed87c221ebad45ee1b80f4f524cd0e"} Jan 27 07:00:20 crc kubenswrapper[4729]: I0127 07:00:20.851520 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerStarted","Data":"46f0980c116c43abfc57e66a2d099c2a2695baf4763bf846fd53f5656f2d3e4e"} Jan 27 07:00:21 crc kubenswrapper[4729]: I0127 07:00:21.860109 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerDied","Data":"371152fb65af385e25e18da290c951ddc3ed87c221ebad45ee1b80f4f524cd0e"} Jan 27 07:00:21 crc kubenswrapper[4729]: I0127 07:00:21.860061 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerID="371152fb65af385e25e18da290c951ddc3ed87c221ebad45ee1b80f4f524cd0e" exitCode=0 Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.111678 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.281632 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m766p\" (UniqueName: \"kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p\") pod \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.281724 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle\") pod \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.281768 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util\") pod \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\" (UID: \"6246b5cc-e7d5-4791-b188-7a4bb601ac73\") " Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.284183 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle" (OuterVolumeSpecName: "bundle") pod "6246b5cc-e7d5-4791-b188-7a4bb601ac73" (UID: "6246b5cc-e7d5-4791-b188-7a4bb601ac73"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.298591 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p" (OuterVolumeSpecName: "kube-api-access-m766p") pod "6246b5cc-e7d5-4791-b188-7a4bb601ac73" (UID: "6246b5cc-e7d5-4791-b188-7a4bb601ac73"). InnerVolumeSpecName "kube-api-access-m766p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.310950 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util" (OuterVolumeSpecName: "util") pod "6246b5cc-e7d5-4791-b188-7a4bb601ac73" (UID: "6246b5cc-e7d5-4791-b188-7a4bb601ac73"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.383913 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m766p\" (UniqueName: \"kubernetes.io/projected/6246b5cc-e7d5-4791-b188-7a4bb601ac73-kube-api-access-m766p\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.384394 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.384558 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6246b5cc-e7d5-4791-b188-7a4bb601ac73-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.872872 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" event={"ID":"6246b5cc-e7d5-4791-b188-7a4bb601ac73","Type":"ContainerDied","Data":"bfca3c8f5e6a9b70a6f1d1812e49506dccc7ee5b0e0e2250e111f584f35a3612"} Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.872910 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.872941 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfca3c8f5e6a9b70a6f1d1812e49506dccc7ee5b0e0e2250e111f584f35a3612" Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.875579 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerID="56901708d7e6e380d25f9fbd4f22055c99a38a00cec55d173d917d1123c6049b" exitCode=0 Jan 27 07:00:22 crc kubenswrapper[4729]: I0127 07:00:22.875610 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerDied","Data":"56901708d7e6e380d25f9fbd4f22055c99a38a00cec55d173d917d1123c6049b"} Jan 27 07:00:23 crc kubenswrapper[4729]: I0127 07:00:23.886544 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerStarted","Data":"233ca6ce85305aa954e5c99fc366d33df3fdded5497e9b7d1bf1b275dd28b3d3"} Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.455055 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.455530 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.470736 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7z4kb" podStartSLOduration=7.824170073 podStartE2EDuration="10.470716309s" podCreationTimestamp="2026-01-27 07:00:20 +0000 UTC" firstStartedPulling="2026-01-27 07:00:20.852692363 +0000 UTC m=+785.919813626" lastFinishedPulling="2026-01-27 07:00:23.499238599 +0000 UTC m=+788.566359862" observedRunningTime="2026-01-27 07:00:23.911138119 +0000 UTC m=+788.978259422" watchObservedRunningTime="2026-01-27 07:00:30.470716309 +0000 UTC m=+795.537837572" Jan 
27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.472800 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn"] Jan 27 07:00:30 crc kubenswrapper[4729]: E0127 07:00:30.473001 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="pull" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.473016 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="pull" Jan 27 07:00:30 crc kubenswrapper[4729]: E0127 07:00:30.473033 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="extract" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.473039 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="extract" Jan 27 07:00:30 crc kubenswrapper[4729]: E0127 07:00:30.473049 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="util" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.473055 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="util" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.473167 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6246b5cc-e7d5-4791-b188-7a4bb601ac73" containerName="extract" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.473495 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.482616 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-78j4f" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.482869 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.482928 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.482876 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.495867 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn"] Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.524522 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.479966 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.564567 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n59rz" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.601768 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498nn\" (UniqueName: \"kubernetes.io/projected/e30eab6b-769a-40f2-9dc0-6f2c54082eca-kube-api-access-498nn\") pod 
\"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.601860 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-apiservice-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.601890 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-webhook-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.708809 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-webhook-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.708936 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498nn\" (UniqueName: \"kubernetes.io/projected/e30eab6b-769a-40f2-9dc0-6f2c54082eca-kube-api-access-498nn\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.709043 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-apiservice-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.713745 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-apiservice-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.727959 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30eab6b-769a-40f2-9dc0-6f2c54082eca-webhook-cert\") pod \"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.733917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498nn\" (UniqueName: \"kubernetes.io/projected/e30eab6b-769a-40f2-9dc0-6f2c54082eca-kube-api-access-498nn\") pod 
\"metallb-operator-controller-manager-589bffb6f5-hrlgn\" (UID: \"e30eab6b-769a-40f2-9dc0-6f2c54082eca\") " pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.804664 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.905151 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-76b44fd978-464zw"] Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.906146 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.907784 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g5z8g" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.908000 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.911295 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 07:00:30 crc kubenswrapper[4729]: I0127 07:00:30.923366 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76b44fd978-464zw"] Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.014688 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.014823 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-webhook-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.015159 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-apiservice-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.015225 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jrp\" (UniqueName: \"kubernetes.io/projected/2c4da10b-2833-4206-900f-205d963cc173-kube-api-access-t5jrp\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.119734 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-webhook-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 
07:00:31.119789 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-apiservice-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.119824 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jrp\" (UniqueName: \"kubernetes.io/projected/2c4da10b-2833-4206-900f-205d963cc173-kube-api-access-t5jrp\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.127716 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-apiservice-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.136993 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c4da10b-2833-4206-900f-205d963cc173-webhook-cert\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.149631 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jrp\" (UniqueName: \"kubernetes.io/projected/2c4da10b-2833-4206-900f-205d963cc173-kube-api-access-t5jrp\") pod \"metallb-operator-webhook-server-76b44fd978-464zw\" (UID: \"2c4da10b-2833-4206-900f-205d963cc173\") " pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.220170 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.247966 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn"] Jan 27 07:00:31 crc kubenswrapper[4729]: W0127 07:00:31.263145 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode30eab6b_769a_40f2_9dc0_6f2c54082eca.slice/crio-b82141f7a71fde75c770036c232319dab306c63e9e3674ee869d80bf8d82c34a WatchSource:0}: Error finding container b82141f7a71fde75c770036c232319dab306c63e9e3674ee869d80bf8d82c34a: Status 404 returned error can't find the container with id b82141f7a71fde75c770036c232319dab306c63e9e3674ee869d80bf8d82c34a Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.444177 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76b44fd978-464zw"] Jan 27 07:00:31 crc kubenswrapper[4729]: W0127 07:00:31.447560 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c4da10b_2833_4206_900f_205d963cc173.slice/crio-f098dea217fb15c82e14dd83730cba980fec0bdf43499e23eb4abe389e47e155 WatchSource:0}: Error finding container f098dea217fb15c82e14dd83730cba980fec0bdf43499e23eb4abe389e47e155: Status 404 returned error can't find the container with id f098dea217fb15c82e14dd83730cba980fec0bdf43499e23eb4abe389e47e155 Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.932108 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" event={"ID":"e30eab6b-769a-40f2-9dc0-6f2c54082eca","Type":"ContainerStarted","Data":"b82141f7a71fde75c770036c232319dab306c63e9e3674ee869d80bf8d82c34a"} Jan 27 07:00:31 crc kubenswrapper[4729]: I0127 07:00:31.933007 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" event={"ID":"2c4da10b-2833-4206-900f-205d963cc173","Type":"ContainerStarted","Data":"f098dea217fb15c82e14dd83730cba980fec0bdf43499e23eb4abe389e47e155"} Jan 27 07:00:33 crc kubenswrapper[4729]: I0127 07:00:33.090429 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:33 crc kubenswrapper[4729]: I0127 07:00:33.090622 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7z4kb" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="registry-server" containerID="cri-o://233ca6ce85305aa954e5c99fc366d33df3fdded5497e9b7d1bf1b275dd28b3d3" gracePeriod=2 Jan 27 07:00:33 crc kubenswrapper[4729]: I0127 07:00:33.589642 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n59rz"] Jan 27 07:00:33 crc kubenswrapper[4729]: I0127 07:00:33.947305 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerID="233ca6ce85305aa954e5c99fc366d33df3fdded5497e9b7d1bf1b275dd28b3d3" exitCode=0 Jan 27 07:00:33 crc kubenswrapper[4729]: I0127 07:00:33.947371 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerDied","Data":"233ca6ce85305aa954e5c99fc366d33df3fdded5497e9b7d1bf1b275dd28b3d3"} Jan 27 07:00:34 crc 
kubenswrapper[4729]: I0127 07:00:34.087956 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 07:00:34 crc kubenswrapper[4729]: I0127 07:00:34.088185 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hcqgr" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="registry-server" containerID="cri-o://d5aacb7d7bce31cc955ea6a799091d4b1b8282fdfe9a0adff3738053879dfab0" gracePeriod=2 Jan 27 07:00:34 crc kubenswrapper[4729]: I0127 07:00:34.957889 4729 generic.go:334] "Generic (PLEG): container finished" podID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerID="d5aacb7d7bce31cc955ea6a799091d4b1b8282fdfe9a0adff3738053879dfab0" exitCode=0 Jan 27 07:00:34 crc kubenswrapper[4729]: I0127 07:00:34.957926 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerDied","Data":"d5aacb7d7bce31cc955ea6a799091d4b1b8282fdfe9a0adff3738053879dfab0"} Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.217133 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.298740 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzf7\" (UniqueName: \"kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7\") pod \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.298818 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content\") pod \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.298848 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities\") pod \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\" (UID: \"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7\") " Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.300009 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities" (OuterVolumeSpecName: "utilities") pod "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" (UID: "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.304683 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7" (OuterVolumeSpecName: "kube-api-access-bhzf7") pod "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" (UID: "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7"). InnerVolumeSpecName "kube-api-access-bhzf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.400173 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.400205 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzf7\" (UniqueName: \"kubernetes.io/projected/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-kube-api-access-bhzf7\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.431096 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" (UID: "fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.500647 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.965826 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z4kb" event={"ID":"fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7","Type":"ContainerDied","Data":"46f0980c116c43abfc57e66a2d099c2a2695baf4763bf846fd53f5656f2d3e4e"} Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.965865 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z4kb" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.965873 4729 scope.go:117] "RemoveContainer" containerID="233ca6ce85305aa954e5c99fc366d33df3fdded5497e9b7d1bf1b275dd28b3d3" Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.990679 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:35 crc kubenswrapper[4729]: I0127 07:00:35.993806 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7z4kb"] Jan 27 07:00:36 crc kubenswrapper[4729]: I0127 07:00:36.372756 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" path="/var/lib/kubelet/pods/fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7/volumes" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.468821 4729 scope.go:117] "RemoveContainer" containerID="56901708d7e6e380d25f9fbd4f22055c99a38a00cec55d173d917d1123c6049b" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.511020 4729 scope.go:117] "RemoveContainer" containerID="371152fb65af385e25e18da290c951ddc3ed87c221ebad45ee1b80f4f524cd0e" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.738922 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.837465 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content\") pod \"529eb2a1-6122-4897-90c9-3212a2de14e1\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.837523 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgxt2\" (UniqueName: \"kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2\") pod \"529eb2a1-6122-4897-90c9-3212a2de14e1\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.837588 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities\") pod \"529eb2a1-6122-4897-90c9-3212a2de14e1\" (UID: \"529eb2a1-6122-4897-90c9-3212a2de14e1\") " Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.838503 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities" (OuterVolumeSpecName: "utilities") pod "529eb2a1-6122-4897-90c9-3212a2de14e1" (UID: "529eb2a1-6122-4897-90c9-3212a2de14e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.842268 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2" (OuterVolumeSpecName: "kube-api-access-jgxt2") pod "529eb2a1-6122-4897-90c9-3212a2de14e1" (UID: "529eb2a1-6122-4897-90c9-3212a2de14e1"). InnerVolumeSpecName "kube-api-access-jgxt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.896393 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "529eb2a1-6122-4897-90c9-3212a2de14e1" (UID: "529eb2a1-6122-4897-90c9-3212a2de14e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.938423 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.938458 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529eb2a1-6122-4897-90c9-3212a2de14e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.938470 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgxt2\" (UniqueName: \"kubernetes.io/projected/529eb2a1-6122-4897-90c9-3212a2de14e1-kube-api-access-jgxt2\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.977287 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" event={"ID":"e30eab6b-769a-40f2-9dc0-6f2c54082eca","Type":"ContainerStarted","Data":"6b199df957988dba8c481274849856e75f9c0882225fac4c7b1c473a922eb2f4"} Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.978066 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.980519 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcqgr" event={"ID":"529eb2a1-6122-4897-90c9-3212a2de14e1","Type":"ContainerDied","Data":"1956baeadf77ccb26b2ac41ae745ed7efb4c01fc9fbe94814a37347a5da852fe"} Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.980550 4729 scope.go:117] "RemoveContainer" containerID="d5aacb7d7bce31cc955ea6a799091d4b1b8282fdfe9a0adff3738053879dfab0" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.980588 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hcqgr" Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.985182 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" event={"ID":"2c4da10b-2833-4206-900f-205d963cc173","Type":"ContainerStarted","Data":"c8a7c84f58435fc44bfea1ad62e640182e6d8a83c61de5b723d98140c70c301d"} Jan 27 07:00:37 crc kubenswrapper[4729]: I0127 07:00:37.985704 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.003458 4729 scope.go:117] "RemoveContainer" containerID="94a5b532ca41b3fe95061fded994f9d783cf95cb2d5775aba83f7abf5f94057c" Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.032394 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" podStartSLOduration=1.829040789 podStartE2EDuration="8.032378817s" podCreationTimestamp="2026-01-27 07:00:30 +0000 UTC" firstStartedPulling="2026-01-27 07:00:31.266409086 +0000 UTC m=+796.333530339" lastFinishedPulling="2026-01-27 07:00:37.469747104 +0000 UTC m=+802.536868367" observedRunningTime="2026-01-27 07:00:38.030896948 +0000 UTC m=+803.098018201" watchObservedRunningTime="2026-01-27 07:00:38.032378817 +0000 UTC m=+803.099500080" Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.033421 4729 scope.go:117] "RemoveContainer" containerID="4677de50d89eacb120a03523432103274d5e538c679a07171688f249d4b5bfc2" Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.055510 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" podStartSLOduration=1.960862847 podStartE2EDuration="8.055495066s" podCreationTimestamp="2026-01-27 07:00:30 +0000 UTC" firstStartedPulling="2026-01-27 07:00:31.450086568 +0000 UTC m=+796.517207831" lastFinishedPulling="2026-01-27 07:00:37.544718787 +0000 UTC m=+802.611840050" observedRunningTime="2026-01-27 07:00:38.053348847 +0000 UTC m=+803.120470110" watchObservedRunningTime="2026-01-27 07:00:38.055495066 +0000 UTC m=+803.122616329" Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.083428 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.087524 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hcqgr"] Jan 27 07:00:38 crc kubenswrapper[4729]: I0127 07:00:38.368430 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" path="/var/lib/kubelet/pods/529eb2a1-6122-4897-90c9-3212a2de14e1/volumes" Jan 27 07:00:51 crc kubenswrapper[4729]: I0127 07:00:51.224745 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76b44fd978-464zw" Jan 27 07:01:10 crc kubenswrapper[4729]: I0127 07:01:10.808386 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-589bffb6f5-hrlgn" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703467 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd"] Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703730 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703752 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703771 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="extract-content" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703779 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="extract-content" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703791 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="extract-content" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703799 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="extract-content" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703808 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="extract-utilities" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703814 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="extract-utilities" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703828 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="extract-utilities" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703835 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="extract-utilities" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.703845 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703852 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703957 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc8d0b2-2073-44e5-ba73-c6a4b4f698d7" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.703977 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="529eb2a1-6122-4897-90c9-3212a2de14e1" containerName="registry-server" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.705125 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.708918 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.709028 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwmf\" (UniqueName: \"kubernetes.io/projected/cb874789-6f11-4e02-93ea-4db078896622-kube-api-access-9lwmf\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.709444 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k7952"] Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.714298 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.714874 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qx2m8" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.719980 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.722506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.722526 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.769408 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd"] Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.804118 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jg8br"] Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.804938 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.809884 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwmf\" (UniqueName: \"kubernetes.io/projected/cb874789-6f11-4e02-93ea-4db078896622-kube-api-access-9lwmf\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.809926 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics-certs\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.809949 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt22n\" (UniqueName: \"kubernetes.io/projected/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-kube-api-access-zt22n\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810007 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-sockets\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810030 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.810116 4729 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.810152 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert podName:cb874789-6f11-4e02-93ea-4db078896622 nodeName:}" failed. No retries permitted until 2026-01-27 07:01:12.310138774 +0000 UTC m=+837.377260037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert") pod "frr-k8s-webhook-server-7df86c4f6c-gt2zd" (UID: "cb874789-6f11-4e02-93ea-4db078896622") : secret "frr-k8s-webhook-server-cert" not found Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810047 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metallb-excludel2\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810198 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-startup\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810233 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810269 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810294 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-reloader\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810308 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-conf\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810327 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.810348 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wgj\" (UniqueName: \"kubernetes.io/projected/0e76713c-f4f5-4566-b8b9-3e125e997b1d-kube-api-access-j2wgj\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.819818 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-drtzd"] Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.820571 4729 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.826683 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.832491 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.832546 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qvw5f" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.832647 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.832628 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.844563 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-drtzd"] Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.850943 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwmf\" (UniqueName: \"kubernetes.io/projected/cb874789-6f11-4e02-93ea-4db078896622-kube-api-access-9lwmf\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.911698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wgj\" (UniqueName: \"kubernetes.io/projected/0e76713c-f4f5-4566-b8b9-3e125e997b1d-kube-api-access-j2wgj\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.911746 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xxr\" (UniqueName: \"kubernetes.io/projected/0dad915a-2911-45cf-9c6c-ca28066dfc55-kube-api-access-55xxr\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.911778 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics-certs\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.911797 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-metrics-certs\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.911870 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt22n\" (UniqueName: \"kubernetes.io/projected/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-kube-api-access-zt22n\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.912203 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-sockets\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.912238 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metallb-excludel2\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.912613 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-sockets\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.921485 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metallb-excludel2\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.921656 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-startup\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.921702 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.921726 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-cert\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.921955 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922049 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922190 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-conf\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc 
kubenswrapper[4729]: E0127 07:01:11.922146 4729 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.922280 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist podName:603cd317-f05d-4596-9dc9-4a7c55b1f1f4 nodeName:}" failed. No retries permitted until 2026-01-27 07:01:12.422264253 +0000 UTC m=+837.489385516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist") pod "speaker-jg8br" (UID: "603cd317-f05d-4596-9dc9-4a7c55b1f1f4") : secret "metallb-memberlist" not found Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922422 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-startup\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922519 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-reloader\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922591 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-frr-conf\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922758 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0e76713c-f4f5-4566-b8b9-3e125e997b1d-reloader\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.922776 4729 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.922792 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: E0127 07:01:11.922827 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs podName:603cd317-f05d-4596-9dc9-4a7c55b1f1f4 nodeName:}" failed. No retries permitted until 2026-01-27 07:01:12.422808681 +0000 UTC m=+837.489929944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs") pod "speaker-jg8br" (UID: "603cd317-f05d-4596-9dc9-4a7c55b1f1f4") : secret "speaker-certs-secret" not found Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.927457 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt22n\" (UniqueName: \"kubernetes.io/projected/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-kube-api-access-zt22n\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.928409 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e76713c-f4f5-4566-b8b9-3e125e997b1d-metrics-certs\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:11 crc kubenswrapper[4729]: I0127 07:01:11.949599 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wgj\" (UniqueName: \"kubernetes.io/projected/0e76713c-f4f5-4566-b8b9-3e125e997b1d-kube-api-access-j2wgj\") pod \"frr-k8s-k7952\" (UID: \"0e76713c-f4f5-4566-b8b9-3e125e997b1d\") " pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.023843 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-metrics-certs\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.023926 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-cert\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.024007 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xxr\" (UniqueName: \"kubernetes.io/projected/0dad915a-2911-45cf-9c6c-ca28066dfc55-kube-api-access-55xxr\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.027585 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-cert\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.028164 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dad915a-2911-45cf-9c6c-ca28066dfc55-metrics-certs\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.047687 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.058971 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xxr\" (UniqueName: \"kubernetes.io/projected/0dad915a-2911-45cf-9c6c-ca28066dfc55-kube-api-access-55xxr\") pod \"controller-6968d8fdc4-drtzd\" (UID: \"0dad915a-2911-45cf-9c6c-ca28066dfc55\") " pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.131996 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.214804 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"48888dc6c9cb772cfe69b49ed1b920d770198e794431131332a894f22047f2ae"} Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.328193 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.333497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb874789-6f11-4e02-93ea-4db078896622-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gt2zd\" (UID: \"cb874789-6f11-4e02-93ea-4db078896622\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.346104 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-drtzd"] Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.430323 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.430381 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:12 crc kubenswrapper[4729]: E0127 07:01:12.433332 4729 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:01:12 crc kubenswrapper[4729]: E0127 07:01:12.433397 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist podName:603cd317-f05d-4596-9dc9-4a7c55b1f1f4 nodeName:}" failed. No retries permitted until 2026-01-27 07:01:13.433378814 +0000 UTC m=+838.500500087 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist") pod "speaker-jg8br" (UID: "603cd317-f05d-4596-9dc9-4a7c55b1f1f4") : secret "metallb-memberlist" not found Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.438129 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-metrics-certs\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.632811 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:12 crc kubenswrapper[4729]: I0127 07:01:12.854725 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd"] Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.226200 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" event={"ID":"cb874789-6f11-4e02-93ea-4db078896622","Type":"ContainerStarted","Data":"1f84988146ce9d635db6a5c5b9153f167c31350e53c796a6b24669f1b9083030"} Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.228787 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-drtzd" event={"ID":"0dad915a-2911-45cf-9c6c-ca28066dfc55","Type":"ContainerStarted","Data":"495cded1d26dca57c902506e0f2f1cccf763417152314e54756f3a5327dacd5c"} Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.228864 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-drtzd" event={"ID":"0dad915a-2911-45cf-9c6c-ca28066dfc55","Type":"ContainerStarted","Data":"6b65d57bbb4ccd1a369c88a83b5c10562bbc0ef1603b436c82faa39432f9f4e0"} Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.228894 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-drtzd" event={"ID":"0dad915a-2911-45cf-9c6c-ca28066dfc55","Type":"ContainerStarted","Data":"da334670b8cae90d5ecdded9c8448d1e9a05b3c21568f5cb6c948748625a4dd9"} Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.228946 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.257376 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-drtzd" podStartSLOduration=2.2573628 podStartE2EDuration="2.2573628s" podCreationTimestamp="2026-01-27 07:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:01:13.255897893 +0000 UTC m=+838.323019196" watchObservedRunningTime="2026-01-27 07:01:13.2573628 +0000 UTC m=+838.324484063" Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.442341 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.447110 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/603cd317-f05d-4596-9dc9-4a7c55b1f1f4-memberlist\") pod \"speaker-jg8br\" (UID: \"603cd317-f05d-4596-9dc9-4a7c55b1f1f4\") " pod="metallb-system/speaker-jg8br" Jan 27 07:01:13 crc kubenswrapper[4729]: I0127 07:01:13.618550 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jg8br" Jan 27 07:01:13 crc kubenswrapper[4729]: W0127 07:01:13.647657 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod603cd317_f05d_4596_9dc9_4a7c55b1f1f4.slice/crio-d4547352013fa2210e781261e694d3011d5d2ada2342fa29810ef615f26f6cc8 WatchSource:0}: Error finding container d4547352013fa2210e781261e694d3011d5d2ada2342fa29810ef615f26f6cc8: Status 404 returned error can't find the container with id d4547352013fa2210e781261e694d3011d5d2ada2342fa29810ef615f26f6cc8 Jan 27 07:01:14 crc kubenswrapper[4729]: I0127 07:01:14.236601 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jg8br" event={"ID":"603cd317-f05d-4596-9dc9-4a7c55b1f1f4","Type":"ContainerStarted","Data":"ac931317dadd96099d839cf504e7381339a002d38539008b684dabde348da9f3"} Jan 27 07:01:14 crc kubenswrapper[4729]: I0127 07:01:14.236861 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jg8br" event={"ID":"603cd317-f05d-4596-9dc9-4a7c55b1f1f4","Type":"ContainerStarted","Data":"877434c3e57a7fbcd1af10326a03488f1a530a3fc2de0f7d10ebf76bc4f7ade9"} Jan 27 07:01:14 crc kubenswrapper[4729]: I0127 07:01:14.236871 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jg8br" event={"ID":"603cd317-f05d-4596-9dc9-4a7c55b1f1f4","Type":"ContainerStarted","Data":"d4547352013fa2210e781261e694d3011d5d2ada2342fa29810ef615f26f6cc8"} Jan 27 07:01:14 crc kubenswrapper[4729]: I0127 07:01:14.237440 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jg8br" Jan 27 07:01:14 crc kubenswrapper[4729]: I0127 07:01:14.259181 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jg8br" podStartSLOduration=3.2591651280000002 podStartE2EDuration="3.259165128s" podCreationTimestamp="2026-01-27 07:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:01:14.256239773 +0000 UTC m=+839.323361036" watchObservedRunningTime="2026-01-27 07:01:14.259165128 +0000 UTC m=+839.326286391" Jan 27 07:01:20 crc kubenswrapper[4729]: I0127 07:01:20.280064 4729 generic.go:334] "Generic (PLEG): container finished" podID="0e76713c-f4f5-4566-b8b9-3e125e997b1d" containerID="ff22fc1c6869f18e4c21f03deb77d07a710363491d77d12aecc15dadf8f7e36a" exitCode=0 Jan 27 07:01:20 crc kubenswrapper[4729]: I0127 07:01:20.280656 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerDied","Data":"ff22fc1c6869f18e4c21f03deb77d07a710363491d77d12aecc15dadf8f7e36a"} Jan 27 07:01:21 crc kubenswrapper[4729]: I0127 07:01:21.292619 4729 generic.go:334] "Generic (PLEG): container finished" podID="0e76713c-f4f5-4566-b8b9-3e125e997b1d" containerID="2b06dcf3920bd7b32073b915923af2d5c4b5c6072293be83438244aa320c63db" exitCode=0 Jan 27 07:01:21 crc kubenswrapper[4729]: I0127 07:01:21.292723 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" 
event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerDied","Data":"2b06dcf3920bd7b32073b915923af2d5c4b5c6072293be83438244aa320c63db"} Jan 27 07:01:21 crc kubenswrapper[4729]: I0127 07:01:21.299076 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" event={"ID":"cb874789-6f11-4e02-93ea-4db078896622","Type":"ContainerStarted","Data":"0f0eb7d66ffb3883235f32587c23a64a5cddcc5dbf0c36987031dec149c5ccd7"} Jan 27 07:01:21 crc kubenswrapper[4729]: I0127 07:01:21.299416 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:21 crc kubenswrapper[4729]: I0127 07:01:21.378630 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" podStartSLOduration=3.085351803 podStartE2EDuration="10.378597227s" podCreationTimestamp="2026-01-27 07:01:11 +0000 UTC" firstStartedPulling="2026-01-27 07:01:12.867749314 +0000 UTC m=+837.934870577" lastFinishedPulling="2026-01-27 07:01:20.160994718 +0000 UTC m=+845.228116001" observedRunningTime="2026-01-27 07:01:21.3729992 +0000 UTC m=+846.440120473" watchObservedRunningTime="2026-01-27 07:01:21.378597227 +0000 UTC m=+846.445718530" Jan 27 07:01:22 crc kubenswrapper[4729]: I0127 07:01:22.138787 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-drtzd" Jan 27 07:01:22 crc kubenswrapper[4729]: I0127 07:01:22.308303 4729 generic.go:334] "Generic (PLEG): container finished" podID="0e76713c-f4f5-4566-b8b9-3e125e997b1d" containerID="c229e91f8d1597775ee11077e189b28e6316d82080ff330adca61a5acb186e93" exitCode=0 Jan 27 07:01:22 crc kubenswrapper[4729]: I0127 07:01:22.310476 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerDied","Data":"c229e91f8d1597775ee11077e189b28e6316d82080ff330adca61a5acb186e93"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.326822 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"fb3fe8257458953a050748f8e31428f978d996102bc0cebc5ca9325878073570"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.326864 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"a72a7a011bd48a3f81f486c99e19e7d79fc55fc60b6f5ab8fca746364eb34172"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.326874 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"604e7173ca612e316cc8e415155fc1c681f68745ea3e2b830a49d89848f75b9b"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.326884 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"3a29f97efaac9d71372d61b79ef339f33ae314730d5be34fbafb67fc8e673980"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.326892 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" 
event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"4fb857e26c29ab0862ac41929d97497b5be12a4e3aa8df227bff7e7daa0eb92d"} Jan 27 07:01:23 crc kubenswrapper[4729]: I0127 07:01:23.624529 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jg8br" Jan 27 07:01:24 crc kubenswrapper[4729]: I0127 07:01:24.341183 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k7952" event={"ID":"0e76713c-f4f5-4566-b8b9-3e125e997b1d","Type":"ContainerStarted","Data":"6b41af09a75ca9dac61e17023f47ef56a3197ea6ea2864cd57bfb610df56110e"} Jan 27 07:01:24 crc kubenswrapper[4729]: I0127 07:01:24.341715 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:24 crc kubenswrapper[4729]: I0127 07:01:24.389171 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k7952" podStartSLOduration=5.419572918 podStartE2EDuration="13.389148684s" podCreationTimestamp="2026-01-27 07:01:11 +0000 UTC" firstStartedPulling="2026-01-27 07:01:12.154814012 +0000 UTC m=+837.221935275" lastFinishedPulling="2026-01-27 07:01:20.124389768 +0000 UTC m=+845.191511041" observedRunningTime="2026-01-27 07:01:24.385113207 +0000 UTC m=+849.452234510" watchObservedRunningTime="2026-01-27 07:01:24.389148684 +0000 UTC m=+849.456269987" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.744831 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.745555 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.748316 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.749919 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qtw2b" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.750782 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.780349 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.864913 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnlx\" (UniqueName: \"kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx\") pod \"openstack-operator-index-mzp9m\" (UID: \"283c049a-2816-4d86-b542-dda05086cfd4\") " pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.965657 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtnlx\" (UniqueName: \"kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx\") pod \"openstack-operator-index-mzp9m\" (UID: \"283c049a-2816-4d86-b542-dda05086cfd4\") " pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:26 crc kubenswrapper[4729]: I0127 07:01:26.991874 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtnlx\" (UniqueName: 
\"kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx\") pod \"openstack-operator-index-mzp9m\" (UID: \"283c049a-2816-4d86-b542-dda05086cfd4\") " pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:27 crc kubenswrapper[4729]: I0127 07:01:27.048611 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:27 crc kubenswrapper[4729]: I0127 07:01:27.095408 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:27 crc kubenswrapper[4729]: I0127 07:01:27.126402 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:27 crc kubenswrapper[4729]: W0127 07:01:27.642916 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod283c049a_2816_4d86_b542_dda05086cfd4.slice/crio-bbdea5fe3665110be5e2e3076f28b486b6c176dda98b5d9bb9582de8f733617e WatchSource:0}: Error finding container bbdea5fe3665110be5e2e3076f28b486b6c176dda98b5d9bb9582de8f733617e: Status 404 returned error can't find the container with id bbdea5fe3665110be5e2e3076f28b486b6c176dda98b5d9bb9582de8f733617e Jan 27 07:01:27 crc kubenswrapper[4729]: I0127 07:01:27.644837 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:28 crc kubenswrapper[4729]: I0127 07:01:28.376620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzp9m" event={"ID":"283c049a-2816-4d86-b542-dda05086cfd4","Type":"ContainerStarted","Data":"bbdea5fe3665110be5e2e3076f28b486b6c176dda98b5d9bb9582de8f733617e"} Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.110888 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.382482 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzp9m" event={"ID":"283c049a-2816-4d86-b542-dda05086cfd4","Type":"ContainerStarted","Data":"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436"} Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.382635 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mzp9m" podUID="283c049a-2816-4d86-b542-dda05086cfd4" containerName="registry-server" containerID="cri-o://a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436" gracePeriod=2 Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.400582 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mzp9m" podStartSLOduration=1.885698047 podStartE2EDuration="4.400566382s" podCreationTimestamp="2026-01-27 07:01:26 +0000 UTC" firstStartedPulling="2026-01-27 07:01:27.648573631 +0000 UTC m=+852.715694904" lastFinishedPulling="2026-01-27 07:01:30.163441966 +0000 UTC m=+855.230563239" observedRunningTime="2026-01-27 07:01:30.397557037 +0000 UTC m=+855.464678340" watchObservedRunningTime="2026-01-27 07:01:30.400566382 +0000 UTC m=+855.467687635" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.722523 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-57z7c"] Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 
07:01:30.723748 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.734567 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-57z7c"] Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.758641 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.825903 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtnlx\" (UniqueName: \"kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx\") pod \"283c049a-2816-4d86-b542-dda05086cfd4\" (UID: \"283c049a-2816-4d86-b542-dda05086cfd4\") " Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.826169 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2nw\" (UniqueName: \"kubernetes.io/projected/a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702-kube-api-access-wx2nw\") pod \"openstack-operator-index-57z7c\" (UID: \"a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702\") " pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.830973 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx" (OuterVolumeSpecName: "kube-api-access-gtnlx") pod "283c049a-2816-4d86-b542-dda05086cfd4" (UID: "283c049a-2816-4d86-b542-dda05086cfd4"). InnerVolumeSpecName "kube-api-access-gtnlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.927853 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2nw\" (UniqueName: \"kubernetes.io/projected/a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702-kube-api-access-wx2nw\") pod \"openstack-operator-index-57z7c\" (UID: \"a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702\") " pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.928003 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtnlx\" (UniqueName: \"kubernetes.io/projected/283c049a-2816-4d86-b542-dda05086cfd4-kube-api-access-gtnlx\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:30 crc kubenswrapper[4729]: I0127 07:01:30.944463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx2nw\" (UniqueName: \"kubernetes.io/projected/a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702-kube-api-access-wx2nw\") pod \"openstack-operator-index-57z7c\" (UID: \"a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702\") " pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.068492 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.386870 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-57z7c"] Jan 27 07:01:31 crc kubenswrapper[4729]: W0127 07:01:31.391869 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a81d4c_4fb0_4e4f_a09e_0540bbd2f702.slice/crio-1f210657170f85074d45771a2b570eab81e0d9a25b0e208d8fdb18c0112ab8a6 WatchSource:0}: Error finding container 1f210657170f85074d45771a2b570eab81e0d9a25b0e208d8fdb18c0112ab8a6: Status 404 returned error can't find the container with id 1f210657170f85074d45771a2b570eab81e0d9a25b0e208d8fdb18c0112ab8a6 Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.393265 4729 generic.go:334] "Generic (PLEG): container finished" podID="283c049a-2816-4d86-b542-dda05086cfd4" containerID="a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436" exitCode=0 Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.393298 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzp9m" event={"ID":"283c049a-2816-4d86-b542-dda05086cfd4","Type":"ContainerDied","Data":"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436"} Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.393318 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzp9m" event={"ID":"283c049a-2816-4d86-b542-dda05086cfd4","Type":"ContainerDied","Data":"bbdea5fe3665110be5e2e3076f28b486b6c176dda98b5d9bb9582de8f733617e"} Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.393366 4729 scope.go:117] "RemoveContainer" containerID="a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.393541 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mzp9m" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.437226 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.442206 4729 scope.go:117] "RemoveContainer" containerID="a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436" Jan 27 07:01:31 crc kubenswrapper[4729]: E0127 07:01:31.442654 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436\": container with ID starting with a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436 not found: ID does not exist" containerID="a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.442690 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436"} err="failed to get container status \"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436\": rpc error: code = NotFound desc = could not find container \"a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436\": container with ID starting with a7a890cf9497464b86dafc2ac85af3a23ea1f7840d3ec3bdf4b8796993dfb436 not found: ID does not exist" Jan 27 07:01:31 crc kubenswrapper[4729]: I0127 07:01:31.444113 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mzp9m"] Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.051009 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k7952" Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.374540 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283c049a-2816-4d86-b542-dda05086cfd4" path="/var/lib/kubelet/pods/283c049a-2816-4d86-b542-dda05086cfd4/volumes" Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.401585 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-57z7c" event={"ID":"a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702","Type":"ContainerStarted","Data":"f054d3e356d5abff294e9a4e8dfb6d3c2f53600b859ea2e8dc2f9032998bbd4e"} Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.401646 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-57z7c" event={"ID":"a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702","Type":"ContainerStarted","Data":"1f210657170f85074d45771a2b570eab81e0d9a25b0e208d8fdb18c0112ab8a6"} Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.423705 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-57z7c" podStartSLOduration=2.344720831 podStartE2EDuration="2.423682347s" podCreationTimestamp="2026-01-27 07:01:30 +0000 UTC" firstStartedPulling="2026-01-27 07:01:31.398560019 +0000 UTC m=+856.465681282" lastFinishedPulling="2026-01-27 07:01:31.477521525 +0000 UTC m=+856.544642798" observedRunningTime="2026-01-27 07:01:32.42250423 +0000 UTC m=+857.489625523" watchObservedRunningTime="2026-01-27 07:01:32.423682347 +0000 UTC m=+857.490803650" Jan 27 07:01:32 crc kubenswrapper[4729]: I0127 07:01:32.645123 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gt2zd" Jan 27 07:01:41 crc kubenswrapper[4729]: I0127 07:01:41.068719 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:41 crc kubenswrapper[4729]: I0127 07:01:41.069288 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:41 crc kubenswrapper[4729]: I0127 07:01:41.115197 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:41 crc kubenswrapper[4729]: I0127 07:01:41.508117 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-57z7c" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.965242 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c"] Jan 27 07:01:42 crc kubenswrapper[4729]: E0127 07:01:42.965762 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283c049a-2816-4d86-b542-dda05086cfd4" containerName="registry-server" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.965778 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="283c049a-2816-4d86-b542-dda05086cfd4" containerName="registry-server" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.965912 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="283c049a-2816-4d86-b542-dda05086cfd4" containerName="registry-server" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.966871 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.975215 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j2xwq" Jan 27 07:01:42 crc kubenswrapper[4729]: I0127 07:01:42.994561 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c"] Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.091693 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxhkj\" (UniqueName: \"kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.091750 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.092036 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util\") pod 
\"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.193606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.193735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxhkj\" (UniqueName: \"kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.193798 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.194167 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.194639 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.224019 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxhkj\" (UniqueName: \"kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.286305 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:43 crc kubenswrapper[4729]: I0127 07:01:43.723105 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c"] Jan 27 07:01:43 crc kubenswrapper[4729]: W0127 07:01:43.728804 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod418934a2_192f_4722_a381_111040d505b7.slice/crio-fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164 WatchSource:0}: Error finding container fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164: Status 404 returned error can't find the container with id fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164 Jan 27 07:01:44 crc kubenswrapper[4729]: I0127 07:01:44.501415 4729 generic.go:334] "Generic (PLEG): container finished" podID="418934a2-192f-4722-a381-111040d505b7" containerID="4faf18d68cc568ee3a087065799dc4bee2d2c77680da64fe331789696d74d8c8" exitCode=0 Jan 27 07:01:44 crc kubenswrapper[4729]: I0127 07:01:44.501454 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" event={"ID":"418934a2-192f-4722-a381-111040d505b7","Type":"ContainerDied","Data":"4faf18d68cc568ee3a087065799dc4bee2d2c77680da64fe331789696d74d8c8"} Jan 27 07:01:44 crc kubenswrapper[4729]: I0127 07:01:44.501487 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" event={"ID":"418934a2-192f-4722-a381-111040d505b7","Type":"ContainerStarted","Data":"fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164"} Jan 27 07:01:46 crc kubenswrapper[4729]: I0127 07:01:46.536217 4729 generic.go:334] "Generic (PLEG): container finished" podID="418934a2-192f-4722-a381-111040d505b7" containerID="e2a30be9adecc4846e22886b7c36f81cc4f466e23431fce37482ab5c2d08f517" exitCode=0 Jan 27 07:01:46 crc kubenswrapper[4729]: I0127 07:01:46.536277 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" event={"ID":"418934a2-192f-4722-a381-111040d505b7","Type":"ContainerDied","Data":"e2a30be9adecc4846e22886b7c36f81cc4f466e23431fce37482ab5c2d08f517"} Jan 27 07:01:47 crc kubenswrapper[4729]: I0127 07:01:47.544116 4729 generic.go:334] "Generic (PLEG): container finished" podID="418934a2-192f-4722-a381-111040d505b7" containerID="56668912bdf5d9f311a2b7dbae8f30866c31032dad5e65d36a728157339ce93f" exitCode=0 Jan 27 07:01:47 crc kubenswrapper[4729]: I0127 07:01:47.544157 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" event={"ID":"418934a2-192f-4722-a381-111040d505b7","Type":"ContainerDied","Data":"56668912bdf5d9f311a2b7dbae8f30866c31032dad5e65d36a728157339ce93f"} Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.866023 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.986450 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle\") pod \"418934a2-192f-4722-a381-111040d505b7\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.986579 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxhkj\" (UniqueName: \"kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj\") pod \"418934a2-192f-4722-a381-111040d505b7\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.986615 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util\") pod \"418934a2-192f-4722-a381-111040d505b7\" (UID: \"418934a2-192f-4722-a381-111040d505b7\") " Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.987022 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle" (OuterVolumeSpecName: "bundle") pod "418934a2-192f-4722-a381-111040d505b7" (UID: "418934a2-192f-4722-a381-111040d505b7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:48 crc kubenswrapper[4729]: I0127 07:01:48.993444 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj" (OuterVolumeSpecName: "kube-api-access-pxhkj") pod "418934a2-192f-4722-a381-111040d505b7" (UID: "418934a2-192f-4722-a381-111040d505b7"). InnerVolumeSpecName "kube-api-access-pxhkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.088428 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxhkj\" (UniqueName: \"kubernetes.io/projected/418934a2-192f-4722-a381-111040d505b7-kube-api-access-pxhkj\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.088462 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.262619 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util" (OuterVolumeSpecName: "util") pod "418934a2-192f-4722-a381-111040d505b7" (UID: "418934a2-192f-4722-a381-111040d505b7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.291928 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418934a2-192f-4722-a381-111040d505b7-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.560263 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" event={"ID":"418934a2-192f-4722-a381-111040d505b7","Type":"ContainerDied","Data":"fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164"} Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.560332 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe570513a0f01480a803b8c3e2d8bd580597bbbcd74ae586cd553a2743d39164" Jan 27 07:01:49 crc kubenswrapper[4729]: I0127 07:01:49.560437 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.157488 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28"] Jan 27 07:01:55 crc kubenswrapper[4729]: E0127 07:01:55.158364 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="util" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.158386 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="util" Jan 27 07:01:55 crc kubenswrapper[4729]: E0127 07:01:55.158429 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="pull" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.158442 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="pull" Jan 27 07:01:55 crc kubenswrapper[4729]: E0127 07:01:55.158454 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="extract" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.158467 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="extract" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.158645 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="418934a2-192f-4722-a381-111040d505b7" containerName="extract" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.159294 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.166290 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-v9gm4" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.193159 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28"] Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.267927 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfvb\" (UniqueName: \"kubernetes.io/projected/1de6892f-5b4c-4722-be63-3e35853e6b20-kube-api-access-trfvb\") pod \"openstack-operator-controller-init-5c58fc478-hdj28\" (UID: \"1de6892f-5b4c-4722-be63-3e35853e6b20\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.369305 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfvb\" (UniqueName: \"kubernetes.io/projected/1de6892f-5b4c-4722-be63-3e35853e6b20-kube-api-access-trfvb\") pod \"openstack-operator-controller-init-5c58fc478-hdj28\" (UID: \"1de6892f-5b4c-4722-be63-3e35853e6b20\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.389212 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfvb\" (UniqueName: \"kubernetes.io/projected/1de6892f-5b4c-4722-be63-3e35853e6b20-kube-api-access-trfvb\") pod \"openstack-operator-controller-init-5c58fc478-hdj28\" (UID: \"1de6892f-5b4c-4722-be63-3e35853e6b20\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.479323 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:01:55 crc kubenswrapper[4729]: I0127 07:01:55.929141 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28"] Jan 27 07:01:56 crc kubenswrapper[4729]: I0127 07:01:56.620342 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" event={"ID":"1de6892f-5b4c-4722-be63-3e35853e6b20","Type":"ContainerStarted","Data":"07c5e7cefa8de7819a770940c2c2caf0791963338090ad62602ab749cb5c5ed9"} Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.021286 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.022446 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.070445 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.070513 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhtw\" (UniqueName: \"kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.070614 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.076889 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.171518 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.171589 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.171614 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhtw\" (UniqueName: \"kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.172568 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.172731 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.188622 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fhhtw\" (UniqueName: \"kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw\") pod \"community-operators-8s56l\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:00 crc kubenswrapper[4729]: I0127 07:02:00.387873 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:01 crc kubenswrapper[4729]: I0127 07:02:01.087595 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:02:01 crc kubenswrapper[4729]: I0127 07:02:01.087657 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:02:02 crc kubenswrapper[4729]: I0127 07:02:02.588694 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:02 crc kubenswrapper[4729]: W0127 07:02:02.594084 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017d7c8e_cc15_4581_95b7_68e78e26c9d9.slice/crio-e9bc50cc440e4e8658b54aa79cb118f7c6ffa12ba1253b425d978d5e8a9aafe6 WatchSource:0}: Error finding container e9bc50cc440e4e8658b54aa79cb118f7c6ffa12ba1253b425d978d5e8a9aafe6: Status 404 returned error can't find the container with id e9bc50cc440e4e8658b54aa79cb118f7c6ffa12ba1253b425d978d5e8a9aafe6 Jan 27 07:02:02 crc kubenswrapper[4729]: I0127 07:02:02.679973 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerStarted","Data":"e9bc50cc440e4e8658b54aa79cb118f7c6ffa12ba1253b425d978d5e8a9aafe6"} Jan 27 07:02:02 crc kubenswrapper[4729]: I0127 07:02:02.682116 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" event={"ID":"1de6892f-5b4c-4722-be63-3e35853e6b20","Type":"ContainerStarted","Data":"093204ce0d5fa898f5eb8aef1f8c7e3185c5de551321b723f4b0a8b32f368229"} Jan 27 07:02:02 crc kubenswrapper[4729]: I0127 07:02:02.682358 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:02:02 crc kubenswrapper[4729]: I0127 07:02:02.730536 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" podStartSLOduration=1.238895951 podStartE2EDuration="7.730518401s" podCreationTimestamp="2026-01-27 07:01:55 +0000 UTC" firstStartedPulling="2026-01-27 07:01:55.946253642 +0000 UTC m=+881.013374905" lastFinishedPulling="2026-01-27 07:02:02.437876092 +0000 UTC m=+887.504997355" observedRunningTime="2026-01-27 07:02:02.724985075 +0000 UTC m=+887.792106348" watchObservedRunningTime="2026-01-27 07:02:02.730518401 +0000 UTC m=+887.797639664" Jan 27 07:02:03 crc kubenswrapper[4729]: I0127 07:02:03.690007 
4729 generic.go:334] "Generic (PLEG): container finished" podID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerID="acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b" exitCode=0 Jan 27 07:02:03 crc kubenswrapper[4729]: I0127 07:02:03.690091 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerDied","Data":"acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b"} Jan 27 07:02:04 crc kubenswrapper[4729]: I0127 07:02:04.697009 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerStarted","Data":"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41"} Jan 27 07:02:05 crc kubenswrapper[4729]: I0127 07:02:05.707404 4729 generic.go:334] "Generic (PLEG): container finished" podID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerID="75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41" exitCode=0 Jan 27 07:02:05 crc kubenswrapper[4729]: I0127 07:02:05.707449 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerDied","Data":"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41"} Jan 27 07:02:07 crc kubenswrapper[4729]: I0127 07:02:07.731657 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerStarted","Data":"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6"} Jan 27 07:02:07 crc kubenswrapper[4729]: I0127 07:02:07.754628 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8s56l" podStartSLOduration=4.847411293 podStartE2EDuration="7.754612431s" podCreationTimestamp="2026-01-27 07:02:00 +0000 UTC" firstStartedPulling="2026-01-27 07:02:03.691706329 +0000 UTC m=+888.758827602" lastFinishedPulling="2026-01-27 07:02:06.598907477 +0000 UTC m=+891.666028740" observedRunningTime="2026-01-27 07:02:07.752967159 +0000 UTC m=+892.820088422" watchObservedRunningTime="2026-01-27 07:02:07.754612431 +0000 UTC m=+892.821733694" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.426331 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.427738 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.451582 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.595429 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.595470 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shskt\" (UniqueName: \"kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.595500 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.696444 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.696481 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shskt\" (UniqueName: \"kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.696510 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.696917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.696995 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.718285 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-shskt\" (UniqueName: \"kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt\") pod \"redhat-marketplace-bp477\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.748032 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:08 crc kubenswrapper[4729]: I0127 07:02:08.998595 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:09 crc kubenswrapper[4729]: W0127 07:02:09.019393 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11808dfe_f8b5_4e83_aa06_9d693d02390f.slice/crio-083afce88dc950c7fa73edfeec3fd93f7d9196ff25ce4d7ddb6299b7e9c1edcb WatchSource:0}: Error finding container 083afce88dc950c7fa73edfeec3fd93f7d9196ff25ce4d7ddb6299b7e9c1edcb: Status 404 returned error can't find the container with id 083afce88dc950c7fa73edfeec3fd93f7d9196ff25ce4d7ddb6299b7e9c1edcb Jan 27 07:02:09 crc kubenswrapper[4729]: I0127 07:02:09.746879 4729 generic.go:334] "Generic (PLEG): container finished" podID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerID="8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab" exitCode=0 Jan 27 07:02:09 crc kubenswrapper[4729]: I0127 07:02:09.746996 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerDied","Data":"8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab"} Jan 27 07:02:09 crc kubenswrapper[4729]: I0127 07:02:09.747261 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerStarted","Data":"083afce88dc950c7fa73edfeec3fd93f7d9196ff25ce4d7ddb6299b7e9c1edcb"} Jan 27 07:02:10 crc kubenswrapper[4729]: I0127 07:02:10.391191 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:10 crc kubenswrapper[4729]: I0127 07:02:10.391484 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:10 crc kubenswrapper[4729]: I0127 07:02:10.452014 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:10 crc kubenswrapper[4729]: I0127 07:02:10.754347 4729 generic.go:334] "Generic (PLEG): container finished" podID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerID="9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4" exitCode=0 Jan 27 07:02:10 crc kubenswrapper[4729]: I0127 07:02:10.755206 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerDied","Data":"9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4"} Jan 27 07:02:11 crc kubenswrapper[4729]: I0127 07:02:11.764107 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerStarted","Data":"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312"} 
Jan 27 07:02:11 crc kubenswrapper[4729]: I0127 07:02:11.783680 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bp477" podStartSLOduration=2.299963903 podStartE2EDuration="3.783659937s" podCreationTimestamp="2026-01-27 07:02:08 +0000 UTC" firstStartedPulling="2026-01-27 07:02:09.748591312 +0000 UTC m=+894.815712575" lastFinishedPulling="2026-01-27 07:02:11.232287346 +0000 UTC m=+896.299408609" observedRunningTime="2026-01-27 07:02:11.782045276 +0000 UTC m=+896.849166549" watchObservedRunningTime="2026-01-27 07:02:11.783659937 +0000 UTC m=+896.850781200" Jan 27 07:02:15 crc kubenswrapper[4729]: I0127 07:02:15.483022 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-hdj28" Jan 27 07:02:18 crc kubenswrapper[4729]: I0127 07:02:18.749140 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:18 crc kubenswrapper[4729]: I0127 07:02:18.749529 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:18 crc kubenswrapper[4729]: I0127 07:02:18.815714 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:18 crc kubenswrapper[4729]: I0127 07:02:18.873988 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:19 crc kubenswrapper[4729]: I0127 07:02:19.062899 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:20 crc kubenswrapper[4729]: I0127 07:02:20.511545 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:20 crc kubenswrapper[4729]: I0127 07:02:20.823313 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bp477" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="registry-server" containerID="cri-o://48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312" gracePeriod=2 Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.421377 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.452872 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.453122 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8s56l" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="registry-server" containerID="cri-o://80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6" gracePeriod=2 Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.471180 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities\") pod \"11808dfe-f8b5-4e83-aa06-9d693d02390f\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.471257 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content\") pod \"11808dfe-f8b5-4e83-aa06-9d693d02390f\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.471280 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shskt\" (UniqueName: \"kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt\") pod \"11808dfe-f8b5-4e83-aa06-9d693d02390f\" (UID: \"11808dfe-f8b5-4e83-aa06-9d693d02390f\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.472118 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities" (OuterVolumeSpecName: "utilities") pod "11808dfe-f8b5-4e83-aa06-9d693d02390f" (UID: "11808dfe-f8b5-4e83-aa06-9d693d02390f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.478201 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt" (OuterVolumeSpecName: "kube-api-access-shskt") pod "11808dfe-f8b5-4e83-aa06-9d693d02390f" (UID: "11808dfe-f8b5-4e83-aa06-9d693d02390f"). InnerVolumeSpecName "kube-api-access-shskt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.503137 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11808dfe-f8b5-4e83-aa06-9d693d02390f" (UID: "11808dfe-f8b5-4e83-aa06-9d693d02390f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.572289 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.572342 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11808dfe-f8b5-4e83-aa06-9d693d02390f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.572354 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shskt\" (UniqueName: \"kubernetes.io/projected/11808dfe-f8b5-4e83-aa06-9d693d02390f-kube-api-access-shskt\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.817455 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.829629 4729 generic.go:334] "Generic (PLEG): container finished" podID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerID="48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312" exitCode=0 Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.829704 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp477" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.829694 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerDied","Data":"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312"} Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.829839 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp477" event={"ID":"11808dfe-f8b5-4e83-aa06-9d693d02390f","Type":"ContainerDied","Data":"083afce88dc950c7fa73edfeec3fd93f7d9196ff25ce4d7ddb6299b7e9c1edcb"} Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.829882 4729 scope.go:117] "RemoveContainer" containerID="48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.832800 4729 generic.go:334] "Generic (PLEG): container finished" podID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerID="80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6" exitCode=0 Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.832829 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerDied","Data":"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6"} Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.832855 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8s56l" event={"ID":"017d7c8e-cc15-4581-95b7-68e78e26c9d9","Type":"ContainerDied","Data":"e9bc50cc440e4e8658b54aa79cb118f7c6ffa12ba1253b425d978d5e8a9aafe6"} Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.832857 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8s56l" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.845514 4729 scope.go:117] "RemoveContainer" containerID="9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.881624 4729 scope.go:117] "RemoveContainer" containerID="8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.881935 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhtw\" (UniqueName: \"kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw\") pod \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.882106 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content\") pod \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.882151 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities\") pod \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\" (UID: \"017d7c8e-cc15-4581-95b7-68e78e26c9d9\") " Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.883336 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities" (OuterVolumeSpecName: "utilities") pod "017d7c8e-cc15-4581-95b7-68e78e26c9d9" (UID: "017d7c8e-cc15-4581-95b7-68e78e26c9d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.885203 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw" (OuterVolumeSpecName: "kube-api-access-fhhtw") pod "017d7c8e-cc15-4581-95b7-68e78e26c9d9" (UID: "017d7c8e-cc15-4581-95b7-68e78e26c9d9"). InnerVolumeSpecName "kube-api-access-fhhtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.893247 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.894130 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp477"] Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.897932 4729 scope.go:117] "RemoveContainer" containerID="48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.898385 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312\": container with ID starting with 48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312 not found: ID does not exist" containerID="48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.898434 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312"} err="failed to get container status \"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312\": rpc error: code = NotFound desc = could not find container \"48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312\": container with ID starting with 48d587d6dd42e44e4f09aee8071bc95165caa2a88b3186fd63e2cb835d3f5312 not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.898463 4729 scope.go:117] "RemoveContainer" containerID="9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.903581 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4\": container with ID starting with 9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4 not found: ID does not exist" containerID="9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.903618 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4"} err="failed to get container status \"9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4\": rpc error: code = NotFound desc = could not find container \"9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4\": container with ID starting with 9b62a513b5cbe5ab2f9215dc595dbb41bd823c0f377e58bbace0801f8427edf4 not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.903634 4729 scope.go:117] "RemoveContainer" containerID="8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.903860 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab\": container with ID starting with 8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab not found: ID does not exist" containerID="8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab" Jan 27 07:02:21 crc 
kubenswrapper[4729]: I0127 07:02:21.903890 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab"} err="failed to get container status \"8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab\": rpc error: code = NotFound desc = could not find container \"8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab\": container with ID starting with 8e9e40eadfbaba7d3aaf56104bd0ced3f3e00e504e1b3a20fdba1fcd0f0aa0ab not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.903911 4729 scope.go:117] "RemoveContainer" containerID="80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.916182 4729 scope.go:117] "RemoveContainer" containerID="75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.942130 4729 scope.go:117] "RemoveContainer" containerID="acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.949722 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "017d7c8e-cc15-4581-95b7-68e78e26c9d9" (UID: "017d7c8e-cc15-4581-95b7-68e78e26c9d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.968943 4729 scope.go:117] "RemoveContainer" containerID="80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.969444 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6\": container with ID starting with 80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6 not found: ID does not exist" containerID="80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.969490 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6"} err="failed to get container status \"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6\": rpc error: code = NotFound desc = could not find container \"80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6\": container with ID starting with 80840a2fdde1485a10b1a7ab055402ef39044c07064f60cc5ad988d84b2038d6 not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.969523 4729 scope.go:117] "RemoveContainer" containerID="75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.969797 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41\": container with ID starting with 75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41 not found: ID does not exist" containerID="75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.969828 4729 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41"} err="failed to get container status \"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41\": rpc error: code = NotFound desc = could not find container \"75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41\": container with ID starting with 75ee3b0693d48e08878a3326643f0d5e5e976ec2e76b9e6da8d009014a560a41 not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.969852 4729 scope.go:117] "RemoveContainer" containerID="acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b" Jan 27 07:02:21 crc kubenswrapper[4729]: E0127 07:02:21.970126 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b\": container with ID starting with acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b not found: ID does not exist" containerID="acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.970147 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b"} err="failed to get container status \"acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b\": rpc error: code = NotFound desc = could not find container \"acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b\": container with ID starting with acebf289e2bbb12b911f0109c1898dda51d93726d8b43dab44c907cf5b158b2b not found: ID does not exist" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.986884 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.986928 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/017d7c8e-cc15-4581-95b7-68e78e26c9d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:21 crc kubenswrapper[4729]: I0127 07:02:21.986941 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhtw\" (UniqueName: \"kubernetes.io/projected/017d7c8e-cc15-4581-95b7-68e78e26c9d9-kube-api-access-fhhtw\") on node \"crc\" DevicePath \"\"" Jan 27 07:02:22 crc kubenswrapper[4729]: I0127 07:02:22.159984 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:22 crc kubenswrapper[4729]: I0127 07:02:22.163411 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8s56l"] Jan 27 07:02:22 crc kubenswrapper[4729]: I0127 07:02:22.370022 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" path="/var/lib/kubelet/pods/017d7c8e-cc15-4581-95b7-68e78e26c9d9/volumes" Jan 27 07:02:22 crc kubenswrapper[4729]: I0127 07:02:22.370592 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" path="/var/lib/kubelet/pods/11808dfe-f8b5-4e83-aa06-9d693d02390f/volumes" Jan 27 07:02:31 crc kubenswrapper[4729]: I0127 07:02:31.093011 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:02:31 crc kubenswrapper[4729]: I0127 07:02:31.093468 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373400 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg"] Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373876 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="extract-utilities" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373891 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="extract-utilities" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373899 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373906 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373920 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="extract-content" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373928 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="extract-content" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373938 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="extract-utilities" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373944 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="extract-utilities" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373968 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373975 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.373988 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="extract-content" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.373997 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="extract-content" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.374142 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="11808dfe-f8b5-4e83-aa06-9d693d02390f" containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.374155 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="017d7c8e-cc15-4581-95b7-68e78e26c9d9" 
containerName="registry-server" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.374593 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.380342 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nplqm" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.391357 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.400880 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.401991 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.410490 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-phhm4" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.435163 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.437053 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.439485 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2kwx2" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.442570 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.450522 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.451345 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.459724 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ngl7l" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.465489 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.470946 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.508369 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.510627 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.514398 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-czm4p" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.533082 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qq9b\" (UniqueName: \"kubernetes.io/projected/8fae3d38-94af-48f2-a91d-b465752c4d15-kube-api-access-6qq9b\") pod \"cinder-operator-controller-manager-655bf9cfbb-t6jms\" (UID: \"8fae3d38-94af-48f2-a91d-b465752c4d15\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.533130 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddmn\" (UniqueName: \"kubernetes.io/projected/68e763f0-9028-45f1-9a58-c19c5b0189ff-kube-api-access-qddmn\") pod \"glance-operator-controller-manager-67dd55ff59-vtpvf\" (UID: \"68e763f0-9028-45f1-9a58-c19c5b0189ff\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.534258 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9d8t\" (UniqueName: \"kubernetes.io/projected/f8bab3ea-2957-43a7-894e-25c3d7b54287-kube-api-access-v9d8t\") pod \"barbican-operator-controller-manager-65ff799cfd-2mstg\" (UID: \"f8bab3ea-2957-43a7-894e-25c3d7b54287\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.534326 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxdp\" (UniqueName: \"kubernetes.io/projected/841f8e61-a7f0-4cd6-9554-bbcd391a9431-kube-api-access-pnxdp\") pod \"designate-operator-controller-manager-77554cdc5c-5drlv\" (UID: \"841f8e61-a7f0-4cd6-9554-bbcd391a9431\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.534400 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfm2\" (UniqueName: \"kubernetes.io/projected/5504ffe5-4dd8-4ee6-a3d1-cd609ef82770-kube-api-access-cpfm2\") pod \"heat-operator-controller-manager-74866cc64d-ggjqc\" (UID: \"5504ffe5-4dd8-4ee6-a3d1-cd609ef82770\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.541509 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.542502 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.546494 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-q2lk7" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.601175 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.607419 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.608142 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.609747 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7r8cl" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.610980 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641147 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641232 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpfm2\" (UniqueName: \"kubernetes.io/projected/5504ffe5-4dd8-4ee6-a3d1-cd609ef82770-kube-api-access-cpfm2\") pod \"heat-operator-controller-manager-74866cc64d-ggjqc\" (UID: \"5504ffe5-4dd8-4ee6-a3d1-cd609ef82770\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641294 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qq9b\" (UniqueName: \"kubernetes.io/projected/8fae3d38-94af-48f2-a91d-b465752c4d15-kube-api-access-6qq9b\") pod \"cinder-operator-controller-manager-655bf9cfbb-t6jms\" (UID: \"8fae3d38-94af-48f2-a91d-b465752c4d15\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641315 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddmn\" (UniqueName: \"kubernetes.io/projected/68e763f0-9028-45f1-9a58-c19c5b0189ff-kube-api-access-qddmn\") pod \"glance-operator-controller-manager-67dd55ff59-vtpvf\" (UID: \"68e763f0-9028-45f1-9a58-c19c5b0189ff\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641360 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9d8t\" (UniqueName: \"kubernetes.io/projected/f8bab3ea-2957-43a7-894e-25c3d7b54287-kube-api-access-v9d8t\") pod \"barbican-operator-controller-manager-65ff799cfd-2mstg\" (UID: \"f8bab3ea-2957-43a7-894e-25c3d7b54287\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.641399 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxdp\" (UniqueName: 
\"kubernetes.io/projected/841f8e61-a7f0-4cd6-9554-bbcd391a9431-kube-api-access-pnxdp\") pod \"designate-operator-controller-manager-77554cdc5c-5drlv\" (UID: \"841f8e61-a7f0-4cd6-9554-bbcd391a9431\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.648908 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.661967 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.663249 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.668352 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zzgft" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.710200 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qq9b\" (UniqueName: \"kubernetes.io/projected/8fae3d38-94af-48f2-a91d-b465752c4d15-kube-api-access-6qq9b\") pod \"cinder-operator-controller-manager-655bf9cfbb-t6jms\" (UID: \"8fae3d38-94af-48f2-a91d-b465752c4d15\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.710220 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddmn\" (UniqueName: \"kubernetes.io/projected/68e763f0-9028-45f1-9a58-c19c5b0189ff-kube-api-access-qddmn\") pod \"glance-operator-controller-manager-67dd55ff59-vtpvf\" (UID: \"68e763f0-9028-45f1-9a58-c19c5b0189ff\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.712408 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxdp\" (UniqueName: \"kubernetes.io/projected/841f8e61-a7f0-4cd6-9554-bbcd391a9431-kube-api-access-pnxdp\") pod \"designate-operator-controller-manager-77554cdc5c-5drlv\" (UID: \"841f8e61-a7f0-4cd6-9554-bbcd391a9431\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.712602 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.713435 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.715001 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpfm2\" (UniqueName: \"kubernetes.io/projected/5504ffe5-4dd8-4ee6-a3d1-cd609ef82770-kube-api-access-cpfm2\") pod \"heat-operator-controller-manager-74866cc64d-ggjqc\" (UID: \"5504ffe5-4dd8-4ee6-a3d1-cd609ef82770\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.715646 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9v7rf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.718320 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.728872 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.730638 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9d8t\" (UniqueName: \"kubernetes.io/projected/f8bab3ea-2957-43a7-894e-25c3d7b54287-kube-api-access-v9d8t\") pod \"barbican-operator-controller-manager-65ff799cfd-2mstg\" (UID: \"f8bab3ea-2957-43a7-894e-25c3d7b54287\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.743947 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.744052 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzqr\" (UniqueName: \"kubernetes.io/projected/3413118b-46c0-420a-b381-b2fb3c3e7d5f-kube-api-access-hqzqr\") pod \"horizon-operator-controller-manager-77d5c5b54f-lhb4p\" (UID: \"3413118b-46c0-420a-b381-b2fb3c3e7d5f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.744131 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlnz\" (UniqueName: \"kubernetes.io/projected/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-kube-api-access-jwlnz\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.757196 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.764126 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.764880 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.765966 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.776219 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.777043 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.783201 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.798922 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.801454 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gdfvk" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.801528 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.801649 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-96x4k" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.801663 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-snprj" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.830719 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.831559 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.840990 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7hxsm" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846175 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2lg5\" (UniqueName: \"kubernetes.io/projected/f0efecd6-a0c0-419c-a650-4e82d66a0080-kube-api-access-d2lg5\") pod \"ironic-operator-controller-manager-768b776ffb-22f24\" (UID: \"f0efecd6-a0c0-419c-a650-4e82d66a0080\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846239 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzqr\" (UniqueName: \"kubernetes.io/projected/3413118b-46c0-420a-b381-b2fb3c3e7d5f-kube-api-access-hqzqr\") pod \"horizon-operator-controller-manager-77d5c5b54f-lhb4p\" (UID: \"3413118b-46c0-420a-b381-b2fb3c3e7d5f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846306 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlnz\" (UniqueName: \"kubernetes.io/projected/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-kube-api-access-jwlnz\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846352 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwx24\" (UniqueName: \"kubernetes.io/projected/b7167efa-455e-473c-9a89-d9a1b55f6523-kube-api-access-hwx24\") pod \"keystone-operator-controller-manager-55f684fd56-kl8s6\" (UID: \"b7167efa-455e-473c-9a89-d9a1b55f6523\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846401 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.846563 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:33 crc kubenswrapper[4729]: E0127 07:02:33.846632 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:02:34.346611615 +0000 UTC m=+919.413732878 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.846965 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.886122 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.886882 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.889976 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qpwpr" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.903018 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.917609 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlnz\" (UniqueName: \"kubernetes.io/projected/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-kube-api-access-jwlnz\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.920665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzqr\" (UniqueName: \"kubernetes.io/projected/3413118b-46c0-420a-b381-b2fb3c3e7d5f-kube-api-access-hqzqr\") pod \"horizon-operator-controller-manager-77d5c5b54f-lhb4p\" (UID: \"3413118b-46c0-420a-b381-b2fb3c3e7d5f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.923442 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.941462 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947718 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsqm\" (UniqueName: \"kubernetes.io/projected/746a8b7a-6b43-4265-8766-03f4d5682468-kube-api-access-tcsqm\") pod \"neutron-operator-controller-manager-7ffd8d76d4-z8hhj\" (UID: \"746a8b7a-6b43-4265-8766-03f4d5682468\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947773 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2lg5\" (UniqueName: \"kubernetes.io/projected/f0efecd6-a0c0-419c-a650-4e82d66a0080-kube-api-access-d2lg5\") pod \"ironic-operator-controller-manager-768b776ffb-22f24\" (UID: \"f0efecd6-a0c0-419c-a650-4e82d66a0080\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" 
Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947804 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhnb\" (UniqueName: \"kubernetes.io/projected/7770e510-40b8-480e-a29b-f92bee72ccbb-kube-api-access-wrhnb\") pod \"manila-operator-controller-manager-849fcfbb6b-ltqnm\" (UID: \"7770e510-40b8-480e-a29b-f92bee72ccbb\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947870 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwx24\" (UniqueName: \"kubernetes.io/projected/b7167efa-455e-473c-9a89-d9a1b55f6523-kube-api-access-hwx24\") pod \"keystone-operator-controller-manager-55f684fd56-kl8s6\" (UID: \"b7167efa-455e-473c-9a89-d9a1b55f6523\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947942 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5g8\" (UniqueName: \"kubernetes.io/projected/2651401b-47d5-40f2-aaf2-316078ea8ab4-kube-api-access-8g5g8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf\" (UID: \"2651401b-47d5-40f2-aaf2-316078ea8ab4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.947963 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcdj\" (UniqueName: \"kubernetes.io/projected/1363c91a-c57e-47da-b442-1331d4a4e4c1-kube-api-access-fbcdj\") pod \"nova-operator-controller-manager-7f54b7d6d4-dv45b\" (UID: \"1363c91a-c57e-47da-b442-1331d4a4e4c1\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.961675 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.962554 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.975543 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dkwmp" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.975728 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.984638 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2lg5\" (UniqueName: \"kubernetes.io/projected/f0efecd6-a0c0-419c-a650-4e82d66a0080-kube-api-access-d2lg5\") pod \"ironic-operator-controller-manager-768b776ffb-22f24\" (UID: \"f0efecd6-a0c0-419c-a650-4e82d66a0080\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.986122 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww"] Jan 27 07:02:33 crc kubenswrapper[4729]: I0127 07:02:33.992612 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.015750 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwx24\" (UniqueName: \"kubernetes.io/projected/b7167efa-455e-473c-9a89-d9a1b55f6523-kube-api-access-hwx24\") pod \"keystone-operator-controller-manager-55f684fd56-kl8s6\" (UID: \"b7167efa-455e-473c-9a89-d9a1b55f6523\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.036233 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.045380 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.046119 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.048412 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lgr6d" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049405 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7pf\" (UniqueName: \"kubernetes.io/projected/208d609e-4bd2-4e07-8247-c817513cd3f1-kube-api-access-wx7pf\") pod \"octavia-operator-controller-manager-7875d7675-sl5ww\" (UID: \"208d609e-4bd2-4e07-8247-c817513cd3f1\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049473 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5g8\" (UniqueName: \"kubernetes.io/projected/2651401b-47d5-40f2-aaf2-316078ea8ab4-kube-api-access-8g5g8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf\" (UID: \"2651401b-47d5-40f2-aaf2-316078ea8ab4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049500 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbcdj\" (UniqueName: \"kubernetes.io/projected/1363c91a-c57e-47da-b442-1331d4a4e4c1-kube-api-access-fbcdj\") pod \"nova-operator-controller-manager-7f54b7d6d4-dv45b\" (UID: \"1363c91a-c57e-47da-b442-1331d4a4e4c1\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049519 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbbl\" (UniqueName: \"kubernetes.io/projected/2dddb667-9f1f-4fd1-a5be-c1eac664059e-kube-api-access-zzbbl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsqm\" (UniqueName: \"kubernetes.io/projected/746a8b7a-6b43-4265-8766-03f4d5682468-kube-api-access-tcsqm\") pod 
\"neutron-operator-controller-manager-7ffd8d76d4-z8hhj\" (UID: \"746a8b7a-6b43-4265-8766-03f4d5682468\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049568 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhnb\" (UniqueName: \"kubernetes.io/projected/7770e510-40b8-480e-a29b-f92bee72ccbb-kube-api-access-wrhnb\") pod \"manila-operator-controller-manager-849fcfbb6b-ltqnm\" (UID: \"7770e510-40b8-480e-a29b-f92bee72ccbb\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.049599 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.056613 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.087097 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbcdj\" (UniqueName: \"kubernetes.io/projected/1363c91a-c57e-47da-b442-1331d4a4e4c1-kube-api-access-fbcdj\") pod \"nova-operator-controller-manager-7f54b7d6d4-dv45b\" (UID: \"1363c91a-c57e-47da-b442-1331d4a4e4c1\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.087619 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsqm\" (UniqueName: \"kubernetes.io/projected/746a8b7a-6b43-4265-8766-03f4d5682468-kube-api-access-tcsqm\") pod \"neutron-operator-controller-manager-7ffd8d76d4-z8hhj\" (UID: \"746a8b7a-6b43-4265-8766-03f4d5682468\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.088095 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5g8\" (UniqueName: \"kubernetes.io/projected/2651401b-47d5-40f2-aaf2-316078ea8ab4-kube-api-access-8g5g8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf\" (UID: \"2651401b-47d5-40f2-aaf2-316078ea8ab4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.092656 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhnb\" (UniqueName: \"kubernetes.io/projected/7770e510-40b8-480e-a29b-f92bee72ccbb-kube-api-access-wrhnb\") pod \"manila-operator-controller-manager-849fcfbb6b-ltqnm\" (UID: \"7770e510-40b8-480e-a29b-f92bee72ccbb\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.099374 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.136569 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb"] Jan 27 07:02:34 crc kubenswrapper[4729]: 
I0127 07:02:34.137348 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.140108 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-s6ztq" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.140892 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.143047 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.151165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx7pf\" (UniqueName: \"kubernetes.io/projected/208d609e-4bd2-4e07-8247-c817513cd3f1-kube-api-access-wx7pf\") pod \"octavia-operator-controller-manager-7875d7675-sl5ww\" (UID: \"208d609e-4bd2-4e07-8247-c817513cd3f1\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.151234 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbbl\" (UniqueName: \"kubernetes.io/projected/2dddb667-9f1f-4fd1-a5be-c1eac664059e-kube-api-access-zzbbl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.151297 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6f6\" (UniqueName: \"kubernetes.io/projected/9fafbb5d-f848-4dc6-9bb4-4101924b2ab2-kube-api-access-2r6f6\") pod \"ovn-operator-controller-manager-6f75f45d54-cqxx6\" (UID: \"9fafbb5d-f848-4dc6-9bb4-4101924b2ab2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.151331 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.151457 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.151526 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. No retries permitted until 2026-01-27 07:02:34.651509923 +0000 UTC m=+919.718631186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.165152 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.184367 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.184840 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.187916 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbbl\" (UniqueName: \"kubernetes.io/projected/2dddb667-9f1f-4fd1-a5be-c1eac664059e-kube-api-access-zzbbl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.198447 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx7pf\" (UniqueName: \"kubernetes.io/projected/208d609e-4bd2-4e07-8247-c817513cd3f1-kube-api-access-wx7pf\") pod \"octavia-operator-controller-manager-7875d7675-sl5ww\" (UID: \"208d609e-4bd2-4e07-8247-c817513cd3f1\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.223745 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.231224 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.232117 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.239999 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-k58m5" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.244111 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.244973 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.246686 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.247310 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mgvgs" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.263416 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgck\" (UniqueName: \"kubernetes.io/projected/59e3aaff-61c0-4878-bd07-8a83e07eda58-kube-api-access-6qgck\") pod \"placement-operator-controller-manager-79d5ccc684-bkkbb\" (UID: \"59e3aaff-61c0-4878-bd07-8a83e07eda58\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.263486 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6f6\" (UniqueName: \"kubernetes.io/projected/9fafbb5d-f848-4dc6-9bb4-4101924b2ab2-kube-api-access-2r6f6\") pod \"ovn-operator-controller-manager-6f75f45d54-cqxx6\" (UID: \"9fafbb5d-f848-4dc6-9bb4-4101924b2ab2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.264457 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.265187 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.266110 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.269646 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s9xz7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.297369 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.341756 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.373844 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6f6\" (UniqueName: \"kubernetes.io/projected/9fafbb5d-f848-4dc6-9bb4-4101924b2ab2-kube-api-access-2r6f6\") pod \"ovn-operator-controller-manager-6f75f45d54-cqxx6\" (UID: \"9fafbb5d-f848-4dc6-9bb4-4101924b2ab2\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.386261 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgck\" (UniqueName: \"kubernetes.io/projected/59e3aaff-61c0-4878-bd07-8a83e07eda58-kube-api-access-6qgck\") pod \"placement-operator-controller-manager-79d5ccc684-bkkbb\" (UID: \"59e3aaff-61c0-4878-bd07-8a83e07eda58\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.390586 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.392787 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljt7d\" (UniqueName: \"kubernetes.io/projected/aa0fd3d1-eec0-44e5-a28d-d009ae449ee0-kube-api-access-ljt7d\") pod \"swift-operator-controller-manager-547cbdb99f-mphcb\" (UID: \"aa0fd3d1-eec0-44e5-a28d-d009ae449ee0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.390882 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.400105 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.400079892 +0000 UTC m=+920.467201155 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.413410 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgck\" (UniqueName: \"kubernetes.io/projected/59e3aaff-61c0-4878-bd07-8a83e07eda58-kube-api-access-6qgck\") pod \"placement-operator-controller-manager-79d5ccc684-bkkbb\" (UID: \"59e3aaff-61c0-4878-bd07-8a83e07eda58\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.447802 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.448298 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.463965 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.465531 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.476650 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-66pbs" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.482494 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.499829 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.500550 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkww\" (UniqueName: \"kubernetes.io/projected/53cd31cd-4d1b-4d0c-8d48-1826850ed320-kube-api-access-rqkww\") pod \"test-operator-controller-manager-69797bbcbd-ngvsf\" (UID: \"53cd31cd-4d1b-4d0c-8d48-1826850ed320\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.501921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljt7d\" (UniqueName: \"kubernetes.io/projected/aa0fd3d1-eec0-44e5-a28d-d009ae449ee0-kube-api-access-ljt7d\") pod \"swift-operator-controller-manager-547cbdb99f-mphcb\" (UID: \"aa0fd3d1-eec0-44e5-a28d-d009ae449ee0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.501998 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d99p\" (UniqueName: \"kubernetes.io/projected/9b7c6228-376e-4dd1-a3bc-6d93e27cbf60-kube-api-access-2d99p\") pod \"telemetry-operator-controller-manager-799bc87c89-24xxm\" (UID: \"9b7c6228-376e-4dd1-a3bc-6d93e27cbf60\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.523720 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljt7d\" (UniqueName: \"kubernetes.io/projected/aa0fd3d1-eec0-44e5-a28d-d009ae449ee0-kube-api-access-ljt7d\") pod \"swift-operator-controller-manager-547cbdb99f-mphcb\" (UID: \"aa0fd3d1-eec0-44e5-a28d-d009ae449ee0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.563250 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.567541 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.570253 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kccb6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.570388 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.576475 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.582773 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.604555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkww\" (UniqueName: \"kubernetes.io/projected/53cd31cd-4d1b-4d0c-8d48-1826850ed320-kube-api-access-rqkww\") pod \"test-operator-controller-manager-69797bbcbd-ngvsf\" (UID: \"53cd31cd-4d1b-4d0c-8d48-1826850ed320\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.604871 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8q2\" (UniqueName: \"kubernetes.io/projected/d75de196-b27a-44e6-a707-210294d056f5-kube-api-access-2r8q2\") pod \"watcher-operator-controller-manager-75db85654f-qdzzc\" (UID: \"d75de196-b27a-44e6-a707-210294d056f5\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.604961 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d99p\" (UniqueName: \"kubernetes.io/projected/9b7c6228-376e-4dd1-a3bc-6d93e27cbf60-kube-api-access-2d99p\") pod \"telemetry-operator-controller-manager-799bc87c89-24xxm\" (UID: \"9b7c6228-376e-4dd1-a3bc-6d93e27cbf60\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.609105 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.609956 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.611101 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-n2t5n" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.612811 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.626228 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d99p\" (UniqueName: \"kubernetes.io/projected/9b7c6228-376e-4dd1-a3bc-6d93e27cbf60-kube-api-access-2d99p\") pod \"telemetry-operator-controller-manager-799bc87c89-24xxm\" (UID: \"9b7c6228-376e-4dd1-a3bc-6d93e27cbf60\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.634081 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkww\" (UniqueName: \"kubernetes.io/projected/53cd31cd-4d1b-4d0c-8d48-1826850ed320-kube-api-access-rqkww\") pod \"test-operator-controller-manager-69797bbcbd-ngvsf\" (UID: \"53cd31cd-4d1b-4d0c-8d48-1826850ed320\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.635762 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.642301 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.660736 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.678287 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.681380 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.707009 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.707045 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgx8k\" (UniqueName: \"kubernetes.io/projected/3608e6ab-1167-45e0-a01a-48f0339c45e0-kube-api-access-jgx8k\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.707063 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.707170 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.707634 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.707705 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.707683217 +0000 UTC m=+920.774804480 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.707189 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhbq5\" (UniqueName: \"kubernetes.io/projected/3e370583-f827-4e73-a360-10ea26ad6c94-kube-api-access-vhbq5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mtbcm\" (UID: \"3e370583-f827-4e73-a360-10ea26ad6c94\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.708223 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r8q2\" (UniqueName: \"kubernetes.io/projected/d75de196-b27a-44e6-a707-210294d056f5-kube-api-access-2r8q2\") pod \"watcher-operator-controller-manager-75db85654f-qdzzc\" (UID: \"d75de196-b27a-44e6-a707-210294d056f5\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.733223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r8q2\" (UniqueName: \"kubernetes.io/projected/d75de196-b27a-44e6-a707-210294d056f5-kube-api-access-2r8q2\") pod \"watcher-operator-controller-manager-75db85654f-qdzzc\" (UID: \"d75de196-b27a-44e6-a707-210294d056f5\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.796644 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf"] Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.800026 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.808709 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.808745 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhbq5\" (UniqueName: \"kubernetes.io/projected/3e370583-f827-4e73-a360-10ea26ad6c94-kube-api-access-vhbq5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mtbcm\" (UID: \"3e370583-f827-4e73-a360-10ea26ad6c94\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.808802 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgx8k\" (UniqueName: \"kubernetes.io/projected/3608e6ab-1167-45e0-a01a-48f0339c45e0-kube-api-access-jgx8k\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.808822 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.808976 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.809027 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.309014102 +0000 UTC m=+920.376135365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "webhook-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.809277 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: E0127 07:02:34.809302 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.309294921 +0000 UTC m=+920.376416184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "metrics-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.843674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgx8k\" (UniqueName: \"kubernetes.io/projected/3608e6ab-1167-45e0-a01a-48f0339c45e0-kube-api-access-jgx8k\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.844874 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhbq5\" (UniqueName: \"kubernetes.io/projected/3e370583-f827-4e73-a360-10ea26ad6c94-kube-api-access-vhbq5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mtbcm\" (UID: \"3e370583-f827-4e73-a360-10ea26ad6c94\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.864260 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.980553 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.981586 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" event={"ID":"68e763f0-9028-45f1-9a58-c19c5b0189ff","Type":"ContainerStarted","Data":"58954b18b0945445a78bd8fa0dd18fb3bb53860aef755e19db772e30c1896c13"} Jan 27 07:02:34 crc kubenswrapper[4729]: I0127 07:02:34.990466 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" event={"ID":"841f8e61-a7f0-4cd6-9554-bbcd391a9431","Type":"ContainerStarted","Data":"2f0fe9fe0936cc9a21b77340f7b44d5296205b5ddf6c74d139df83efbc1b4aa9"} Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.004374 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" event={"ID":"8fae3d38-94af-48f2-a91d-b465752c4d15","Type":"ContainerStarted","Data":"5fe900abc0b34dc5ede026b8f0457424ae52589c27f779393b5b4fba64f4f8d2"} Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.086057 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.266149 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.272318 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.288523 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc"] Jan 27 07:02:35 crc kubenswrapper[4729]: W0127 07:02:35.289556 4729 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208d609e_4bd2_4e07_8247_c817513cd3f1.slice/crio-42d56fa56f1b4d6e12c9105d2371eeae5e590e07cd19283de410bfffaa01b969 WatchSource:0}: Error finding container 42d56fa56f1b4d6e12c9105d2371eeae5e590e07cd19283de410bfffaa01b969: Status 404 returned error can't find the container with id 42d56fa56f1b4d6e12c9105d2371eeae5e590e07cd19283de410bfffaa01b969 Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.301207 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg"] Jan 27 07:02:35 crc kubenswrapper[4729]: W0127 07:02:35.301866 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7167efa_455e_473c_9a89_d9a1b55f6523.slice/crio-1650015a14ee06ffebb66738700e4fa7f0842bd924c1d6b13cb08257b46ef84e WatchSource:0}: Error finding container 1650015a14ee06ffebb66738700e4fa7f0842bd924c1d6b13cb08257b46ef84e: Status 404 returned error can't find the container with id 1650015a14ee06ffebb66738700e4fa7f0842bd924c1d6b13cb08257b46ef84e Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.304950 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww"] Jan 27 07:02:35 crc kubenswrapper[4729]: W0127 07:02:35.306834 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8bab3ea_2957_43a7_894e_25c3d7b54287.slice/crio-6f99724201f26eadde490e3401a245fc4829ff074223df95a6d06b5aad798154 WatchSource:0}: Error finding container 6f99724201f26eadde490e3401a245fc4829ff074223df95a6d06b5aad798154: Status 404 returned error can't find the container with id 6f99724201f26eadde490e3401a245fc4829ff074223df95a6d06b5aad798154 Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.309727 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p"] Jan 27 07:02:35 crc kubenswrapper[4729]: W0127 07:02:35.314016 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2651401b_47d5_40f2_aaf2_316078ea8ab4.slice/crio-94b7ec2c6e861a0b2a6b103b3e3624e6294c47adf7ede0db06f61e4a7cd9dffa WatchSource:0}: Error finding container 94b7ec2c6e861a0b2a6b103b3e3624e6294c47adf7ede0db06f61e4a7cd9dffa: Status 404 returned error can't find the container with id 94b7ec2c6e861a0b2a6b103b3e3624e6294c47adf7ede0db06f61e4a7cd9dffa Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.317654 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.317729 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " 
pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.317880 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.317926 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:36.317911086 +0000 UTC m=+921.385032349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "webhook-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.317962 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.317980 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:36.317974058 +0000 UTC m=+921.385095311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "metrics-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.317997 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.323804 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.423026 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.426737 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.426826 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:02:37.426805222 +0000 UTC m=+922.493926485 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.445705 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.451448 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm"] Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.464144 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2d99p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-24xxm_openstack-operators(9b7c6228-376e-4dd1-a3bc-6d93e27cbf60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.466102 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" 
podUID="9b7c6228-376e-4dd1-a3bc-6d93e27cbf60" Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.469136 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.473789 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb"] Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.478789 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qgck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-bkkbb_openstack-operators(59e3aaff-61c0-4878-bd07-8a83e07eda58): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.482200 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" podUID="59e3aaff-61c0-4878-bd07-8a83e07eda58" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.494892 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2r6f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-cqxx6_openstack-operators(9fafbb5d-f848-4dc6-9bb4-4101924b2ab2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.496184 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" podUID="9fafbb5d-f848-4dc6-9bb4-4101924b2ab2" Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.524668 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.539476 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb"] Jan 27 07:02:35 crc kubenswrapper[4729]: W0127 07:02:35.549917 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e370583_f827_4e73_a360_10ea26ad6c94.slice/crio-536285b62d40e089a0a3cf85c378595ccf81abc6d0fb3d5e1a97cbf5e5936e5e WatchSource:0}: Error finding container 536285b62d40e089a0a3cf85c378595ccf81abc6d0fb3d5e1a97cbf5e5936e5e: 
Status 404 returned error can't find the container with id 536285b62d40e089a0a3cf85c378595ccf81abc6d0fb3d5e1a97cbf5e5936e5e Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.552214 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf"] Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.554592 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqkww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-ngvsf_openstack-operators(53cd31cd-4d1b-4d0c-8d48-1826850ed320): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.554958 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljt7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-mphcb_openstack-operators(aa0fd3d1-eec0-44e5-a28d-d009ae449ee0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.556140 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" podUID="aa0fd3d1-eec0-44e5-a28d-d009ae449ee0" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.556540 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" podUID="53cd31cd-4d1b-4d0c-8d48-1826850ed320" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.558657 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vhbq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mtbcm_openstack-operators(3e370583-f827-4e73-a360-10ea26ad6c94): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.559807 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" podUID="3e370583-f827-4e73-a360-10ea26ad6c94" Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.562214 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm"] Jan 27 07:02:35 crc kubenswrapper[4729]: I0127 07:02:35.733353 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.733499 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:35 crc kubenswrapper[4729]: E0127 07:02:35.733554 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. 
No retries permitted until 2026-01-27 07:02:37.733539158 +0000 UTC m=+922.800660421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.018246 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" event={"ID":"5504ffe5-4dd8-4ee6-a3d1-cd609ef82770","Type":"ContainerStarted","Data":"450f02ace4c1bf0fe2053b53fcbb89a195f21e1a2ee4b0c941c6bf00e4b22e55"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.023790 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" event={"ID":"f0efecd6-a0c0-419c-a650-4e82d66a0080","Type":"ContainerStarted","Data":"247cc92d6f941cff2d626da756e9fc5b7307fdc1511dc7dc63144f8bdc7e6b4e"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.030466 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" event={"ID":"7770e510-40b8-480e-a29b-f92bee72ccbb","Type":"ContainerStarted","Data":"50ef0f51b5efa065351b398fbb45aa07a92e4fb8941cbccf6c310f314f15078a"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.036325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" event={"ID":"1363c91a-c57e-47da-b442-1331d4a4e4c1","Type":"ContainerStarted","Data":"429f8a3c4ec36e5d81407c645c3055e632195bb6f9186623e13782472c746551"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.039207 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" event={"ID":"aa0fd3d1-eec0-44e5-a28d-d009ae449ee0","Type":"ContainerStarted","Data":"8805c5e48bff9ead553b17c081d0812f9dfff5a1ffd69e25362812f1e354c355"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.044604 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" podUID="aa0fd3d1-eec0-44e5-a28d-d009ae449ee0" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.065764 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" event={"ID":"2651401b-47d5-40f2-aaf2-316078ea8ab4","Type":"ContainerStarted","Data":"94b7ec2c6e861a0b2a6b103b3e3624e6294c47adf7ede0db06f61e4a7cd9dffa"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.081411 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" event={"ID":"9fafbb5d-f848-4dc6-9bb4-4101924b2ab2","Type":"ContainerStarted","Data":"48b82074024d50e1322a94a2908708488405c9b8990fcea5fb5a1fb494e31067"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.082932 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" podUID="9fafbb5d-f848-4dc6-9bb4-4101924b2ab2" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.090806 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" event={"ID":"b7167efa-455e-473c-9a89-d9a1b55f6523","Type":"ContainerStarted","Data":"1650015a14ee06ffebb66738700e4fa7f0842bd924c1d6b13cb08257b46ef84e"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.092266 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" event={"ID":"f8bab3ea-2957-43a7-894e-25c3d7b54287","Type":"ContainerStarted","Data":"6f99724201f26eadde490e3401a245fc4829ff074223df95a6d06b5aad798154"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.111314 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" event={"ID":"9b7c6228-376e-4dd1-a3bc-6d93e27cbf60","Type":"ContainerStarted","Data":"ce54208ec75def41184dc30a7f31b2bd135bb51ebb9e53b26ad6cafe1fe3c29d"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.118256 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" podUID="9b7c6228-376e-4dd1-a3bc-6d93e27cbf60" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.170786 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" event={"ID":"3413118b-46c0-420a-b381-b2fb3c3e7d5f","Type":"ContainerStarted","Data":"ee7cfa3e76491c0bba2306af934c3187d277a8dff73b957409303428941fcbbd"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.176829 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" event={"ID":"208d609e-4bd2-4e07-8247-c817513cd3f1","Type":"ContainerStarted","Data":"42d56fa56f1b4d6e12c9105d2371eeae5e590e07cd19283de410bfffaa01b969"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.179323 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" event={"ID":"53cd31cd-4d1b-4d0c-8d48-1826850ed320","Type":"ContainerStarted","Data":"7be685b4f240f7804327a99baa8130aff61061ccea7448467f6c614b9834097d"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.180662 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" podUID="53cd31cd-4d1b-4d0c-8d48-1826850ed320" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.181734 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" 
event={"ID":"746a8b7a-6b43-4265-8766-03f4d5682468","Type":"ContainerStarted","Data":"6084a58b2aa61f711d2c1d27e62f40876b893a3c77aacbb6a732bd33fd5331b9"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.204961 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" event={"ID":"59e3aaff-61c0-4878-bd07-8a83e07eda58","Type":"ContainerStarted","Data":"f9c3fe6f0b9220ee8cab0414927f5415e7733ad3d00c538c866aa306f7a29fb7"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.231598 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" podUID="59e3aaff-61c0-4878-bd07-8a83e07eda58" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.255259 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" event={"ID":"3e370583-f827-4e73-a360-10ea26ad6c94","Type":"ContainerStarted","Data":"536285b62d40e089a0a3cf85c378595ccf81abc6d0fb3d5e1a97cbf5e5936e5e"} Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.259331 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" podUID="3e370583-f827-4e73-a360-10ea26ad6c94" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.269804 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" event={"ID":"d75de196-b27a-44e6-a707-210294d056f5","Type":"ContainerStarted","Data":"eb4024078ed14b62756f1b12924663d9d95f83bbd20969d9ab71daf46807efcc"} Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.345051 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:36 crc kubenswrapper[4729]: I0127 07:02:36.345206 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.345414 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.345477 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:02:38.345461251 +0000 UTC m=+923.412582514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "webhook-server-cert" not found Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.345517 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:36 crc kubenswrapper[4729]: E0127 07:02:36.345691 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:38.345673567 +0000 UTC m=+923.412794830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "metrics-server-cert" not found Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.279963 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" podUID="aa0fd3d1-eec0-44e5-a28d-d009ae449ee0" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.281411 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" podUID="9b7c6228-376e-4dd1-a3bc-6d93e27cbf60" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.281469 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" podUID="3e370583-f827-4e73-a360-10ea26ad6c94" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.281514 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" podUID="59e3aaff-61c0-4878-bd07-8a83e07eda58" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.281551 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" podUID="9fafbb5d-f848-4dc6-9bb4-4101924b2ab2" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.283649 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" podUID="53cd31cd-4d1b-4d0c-8d48-1826850ed320" Jan 27 07:02:37 crc kubenswrapper[4729]: I0127 07:02:37.477205 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.477429 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.477497 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:02:41.477479682 +0000 UTC m=+926.544600945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:37 crc kubenswrapper[4729]: I0127 07:02:37.788185 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.788385 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:37 crc kubenswrapper[4729]: E0127 07:02:37.788465 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. No retries permitted until 2026-01-27 07:02:41.788445382 +0000 UTC m=+926.855566645 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:38 crc kubenswrapper[4729]: I0127 07:02:38.396134 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:38 crc kubenswrapper[4729]: I0127 07:02:38.396210 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:38 crc kubenswrapper[4729]: E0127 07:02:38.396316 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:38 crc kubenswrapper[4729]: E0127 07:02:38.396329 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:38 crc kubenswrapper[4729]: E0127 07:02:38.396364 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:42.396349978 +0000 UTC m=+927.463471241 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "webhook-server-cert" not found Jan 27 07:02:38 crc kubenswrapper[4729]: E0127 07:02:38.396405 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:42.396386979 +0000 UTC m=+927.463508242 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "metrics-server-cert" not found Jan 27 07:02:41 crc kubenswrapper[4729]: I0127 07:02:41.541196 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:41 crc kubenswrapper[4729]: E0127 07:02:41.541435 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:41 crc kubenswrapper[4729]: E0127 07:02:41.541934 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:02:49.54189892 +0000 UTC m=+934.609020213 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:41 crc kubenswrapper[4729]: I0127 07:02:41.845825 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:41 crc kubenswrapper[4729]: E0127 07:02:41.846016 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:41 crc kubenswrapper[4729]: E0127 07:02:41.846158 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. No retries permitted until 2026-01-27 07:02:49.846137808 +0000 UTC m=+934.913259071 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:42 crc kubenswrapper[4729]: I0127 07:02:42.455721 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:42 crc kubenswrapper[4729]: E0127 07:02:42.455857 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:42 crc kubenswrapper[4729]: E0127 07:02:42.455945 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:50.455928833 +0000 UTC m=+935.523050096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "webhook-server-cert" not found Jan 27 07:02:42 crc kubenswrapper[4729]: I0127 07:02:42.456557 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:42 crc kubenswrapper[4729]: E0127 07:02:42.456645 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:42 crc kubenswrapper[4729]: E0127 07:02:42.456672 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs podName:3608e6ab-1167-45e0-a01a-48f0339c45e0 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:50.456662777 +0000 UTC m=+935.523784040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-qd5rn" (UID: "3608e6ab-1167-45e0-a01a-48f0339c45e0") : secret "metrics-server-cert" not found Jan 27 07:02:48 crc kubenswrapper[4729]: E0127 07:02:48.302024 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84" Jan 27 07:02:48 crc kubenswrapper[4729]: E0127 07:02:48.302586 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrhnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-ltqnm_openstack-operators(7770e510-40b8-480e-a29b-f92bee72ccbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:48 crc kubenswrapper[4729]: E0127 07:02:48.304450 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" 
podUID="7770e510-40b8-480e-a29b-f92bee72ccbb" Jan 27 07:02:48 crc kubenswrapper[4729]: E0127 07:02:48.364930 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" podUID="7770e510-40b8-480e-a29b-f92bee72ccbb" Jan 27 07:02:49 crc kubenswrapper[4729]: I0127 07:02:49.565804 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:02:49 crc kubenswrapper[4729]: E0127 07:02:49.566001 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:49 crc kubenswrapper[4729]: E0127 07:02:49.566085 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert podName:411fadb3-87a6-4a23-b0a8-a6ba4f5e529c nodeName:}" failed. No retries permitted until 2026-01-27 07:03:05.566054065 +0000 UTC m=+950.633175328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert") pod "infra-operator-controller-manager-7d75bc88d5-fbmgs" (UID: "411fadb3-87a6-4a23-b0a8-a6ba4f5e529c") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:49 crc kubenswrapper[4729]: I0127 07:02:49.872928 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:02:49 crc kubenswrapper[4729]: E0127 07:02:49.873147 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:49 crc kubenswrapper[4729]: E0127 07:02:49.873221 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert podName:2dddb667-9f1f-4fd1-a5be-c1eac664059e nodeName:}" failed. No retries permitted until 2026-01-27 07:03:05.873203554 +0000 UTC m=+950.940324817 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" (UID: "2dddb667-9f1f-4fd1-a5be-c1eac664059e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:50 crc kubenswrapper[4729]: I0127 07:02:50.483652 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:50 crc kubenswrapper[4729]: I0127 07:02:50.483895 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:50 crc kubenswrapper[4729]: I0127 07:02:50.490109 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:50 crc kubenswrapper[4729]: I0127 07:02:50.490141 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3608e6ab-1167-45e0-a01a-48f0339c45e0-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-qd5rn\" (UID: \"3608e6ab-1167-45e0-a01a-48f0339c45e0\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:50 crc kubenswrapper[4729]: I0127 07:02:50.555582 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:02:50 crc kubenswrapper[4729]: E0127 07:02:50.966708 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49" Jan 27 07:02:50 crc kubenswrapper[4729]: E0127 07:02:50.966954 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wx7pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-sl5ww_openstack-operators(208d609e-4bd2-4e07-8247-c817513cd3f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:50 crc kubenswrapper[4729]: E0127 07:02:50.968293 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" podUID="208d609e-4bd2-4e07-8247-c817513cd3f1" Jan 27 07:02:51 crc kubenswrapper[4729]: E0127 07:02:51.390945 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" podUID="208d609e-4bd2-4e07-8247-c817513cd3f1" Jan 27 07:02:51 crc kubenswrapper[4729]: E0127 07:02:51.533250 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977" Jan 27 07:02:51 crc kubenswrapper[4729]: E0127 07:02:51.533471 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v9d8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-65ff799cfd-2mstg_openstack-operators(f8bab3ea-2957-43a7-894e-25c3d7b54287): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:51 crc kubenswrapper[4729]: E0127 07:02:51.534600 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" 
podUID="f8bab3ea-2957-43a7-894e-25c3d7b54287" Jan 27 07:02:52 crc kubenswrapper[4729]: E0127 07:02:52.399147 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" podUID="f8bab3ea-2957-43a7-894e-25c3d7b54287" Jan 27 07:02:53 crc kubenswrapper[4729]: E0127 07:02:53.766016 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 27 07:02:53 crc kubenswrapper[4729]: E0127 07:02:53.766683 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcsqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-z8hhj_openstack-operators(746a8b7a-6b43-4265-8766-03f4d5682468): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:53 crc kubenswrapper[4729]: E0127 07:02:53.768440 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" podUID="746a8b7a-6b43-4265-8766-03f4d5682468" Jan 27 07:02:54 crc kubenswrapper[4729]: E0127 07:02:54.411278 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" podUID="746a8b7a-6b43-4265-8766-03f4d5682468" Jan 27 07:02:54 crc kubenswrapper[4729]: E0127 07:02:54.555090 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:b32a57b31e821e6c2547e43b39d064a265af889add25d7693914d86368bfd76c" Jan 27 07:02:54 crc kubenswrapper[4729]: E0127 07:02:54.555261 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:b32a57b31e821e6c2547e43b39d064a265af889add25d7693914d86368bfd76c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cpfm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-74866cc64d-ggjqc_openstack-operators(5504ffe5-4dd8-4ee6-a3d1-cd609ef82770): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 27 07:02:54 crc kubenswrapper[4729]: E0127 07:02:54.556421 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" podUID="5504ffe5-4dd8-4ee6-a3d1-cd609ef82770" Jan 27 07:02:55 crc kubenswrapper[4729]: E0127 07:02:55.419141 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:b32a57b31e821e6c2547e43b39d064a265af889add25d7693914d86368bfd76c\\\"\"" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" podUID="5504ffe5-4dd8-4ee6-a3d1-cd609ef82770" Jan 27 07:02:55 crc kubenswrapper[4729]: E0127 07:02:55.727526 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 07:02:55 crc kubenswrapper[4729]: E0127 07:02:55.727861 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqzqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod horizon-operator-controller-manager-77d5c5b54f-lhb4p_openstack-operators(3413118b-46c0-420a-b381-b2fb3c3e7d5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:55 crc kubenswrapper[4729]: E0127 07:02:55.729617 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" podUID="3413118b-46c0-420a-b381-b2fb3c3e7d5f" Jan 27 07:02:56 crc kubenswrapper[4729]: E0127 07:02:56.423970 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" podUID="3413118b-46c0-420a-b381-b2fb3c3e7d5f" Jan 27 07:02:56 crc kubenswrapper[4729]: E0127 07:02:56.445504 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 07:02:56 crc kubenswrapper[4729]: E0127 07:02:56.445658 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hwx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-kl8s6_openstack-operators(b7167efa-455e-473c-9a89-d9a1b55f6523): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:56 crc kubenswrapper[4729]: E0127 07:02:56.446821 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" podUID="b7167efa-455e-473c-9a89-d9a1b55f6523" Jan 27 07:02:57 crc kubenswrapper[4729]: E0127 07:02:57.432001 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" podUID="b7167efa-455e-473c-9a89-d9a1b55f6523" Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.087867 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.088329 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.088394 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.089310 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.089406 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" 
containerID="cri-o://1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40" gracePeriod=600 Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.455472 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40" exitCode=0 Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.455524 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40"} Jan 27 07:03:01 crc kubenswrapper[4729]: I0127 07:03:01.455563 4729 scope.go:117] "RemoveContainer" containerID="68bfd80bb6423f08d69b967fb2f7f062eecb3f011230ca94aaaa763bc9d7775b" Jan 27 07:03:04 crc kubenswrapper[4729]: I0127 07:03:04.759516 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn"] Jan 27 07:03:04 crc kubenswrapper[4729]: W0127 07:03:04.834817 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3608e6ab_1167_45e0_a01a_48f0339c45e0.slice/crio-994a24826fe420b0a6b95a69c3a63a1a92da75f5269bd28f13466f626546dbaa WatchSource:0}: Error finding container 994a24826fe420b0a6b95a69c3a63a1a92da75f5269bd28f13466f626546dbaa: Status 404 returned error can't find the container with id 994a24826fe420b0a6b95a69c3a63a1a92da75f5269bd28f13466f626546dbaa Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.499907 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" event={"ID":"9b7c6228-376e-4dd1-a3bc-6d93e27cbf60","Type":"ContainerStarted","Data":"6ada392bb9b2c3f5e99385c8cc9b648010adc9ed61fb2bc8d32b8143e1bd0a19"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.500572 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.501292 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" event={"ID":"841f8e61-a7f0-4cd6-9554-bbcd391a9431","Type":"ContainerStarted","Data":"e9ed755dba2b54d490e28242e7ab5e3626e338e0d2d91dacd553a8c7b379ad88"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.501361 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.502777 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" event={"ID":"59e3aaff-61c0-4878-bd07-8a83e07eda58","Type":"ContainerStarted","Data":"733495d639fb635ab0e2f34788c406494dc740eac1deb255cfee7c85bf1c7704"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.502978 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.504176 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" 
event={"ID":"aa0fd3d1-eec0-44e5-a28d-d009ae449ee0","Type":"ContainerStarted","Data":"aa9532e14d9f3db654771096b6ed6b961f1a4f0ab95e78903c269759bf4577f0"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.504596 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.505715 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" event={"ID":"d75de196-b27a-44e6-a707-210294d056f5","Type":"ContainerStarted","Data":"43d4115da009c5dc215f9c4870350dd029f2f7ff41aa9b20efaeb39f9e0e2707"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.505857 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.507052 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" event={"ID":"f0efecd6-a0c0-419c-a650-4e82d66a0080","Type":"ContainerStarted","Data":"0db1b4195a87fa3a809368b4b863c4212621208e750d0a6e8dae7002871cf9cf"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.507190 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.508607 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" event={"ID":"8fae3d38-94af-48f2-a91d-b465752c4d15","Type":"ContainerStarted","Data":"934078d9000a46d42d03136f45b8f52bd05a886f846ec478598521deeb21ebfe"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.508704 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.510666 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.512112 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" event={"ID":"1363c91a-c57e-47da-b442-1331d4a4e4c1","Type":"ContainerStarted","Data":"9742a860f41bc97aade1f7487eb47d2872910b963a1873a555b0d02e20221019"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.512240 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.513028 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" event={"ID":"3608e6ab-1167-45e0-a01a-48f0339c45e0","Type":"ContainerStarted","Data":"994a24826fe420b0a6b95a69c3a63a1a92da75f5269bd28f13466f626546dbaa"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.514516 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" 
event={"ID":"68e763f0-9028-45f1-9a58-c19c5b0189ff","Type":"ContainerStarted","Data":"f3b58a41ff9a58660e618428bdbf7a95115676bdcefcf1c6aa39d56019bbe6d5"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.514589 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.515969 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" event={"ID":"2651401b-47d5-40f2-aaf2-316078ea8ab4","Type":"ContainerStarted","Data":"de881ead8348b6f7f791ba98ce457a2de80f9fec8b2c5e80d4987639677f9922"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.516066 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.517407 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" event={"ID":"9fafbb5d-f848-4dc6-9bb4-4101924b2ab2","Type":"ContainerStarted","Data":"14a13710e63126a37557724538690204bcb5c491214d88b3c5e8678729c96aa9"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.517583 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.518913 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" event={"ID":"53cd31cd-4d1b-4d0c-8d48-1826850ed320","Type":"ContainerStarted","Data":"7787815a2b0199d856f1062358bc8bdd0814651264a17749aa116a2016f0af5b"} Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.519118 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.541474 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" podStartSLOduration=3.611270187 podStartE2EDuration="32.541459808s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.463898839 +0000 UTC m=+920.531020102" lastFinishedPulling="2026-01-27 07:03:04.39408846 +0000 UTC m=+949.461209723" observedRunningTime="2026-01-27 07:03:05.539868477 +0000 UTC m=+950.606989740" watchObservedRunningTime="2026-01-27 07:03:05.541459808 +0000 UTC m=+950.608581061" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.559689 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" podStartSLOduration=9.586478146 podStartE2EDuration="32.559670506s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.324951109 +0000 UTC m=+920.392072372" lastFinishedPulling="2026-01-27 07:02:58.298143429 +0000 UTC m=+943.365264732" observedRunningTime="2026-01-27 07:03:05.558168489 +0000 UTC m=+950.625289772" watchObservedRunningTime="2026-01-27 07:03:05.559670506 +0000 UTC m=+950.626791769" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.567341 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.584134 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/411fadb3-87a6-4a23-b0a8-a6ba4f5e529c-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-fbmgs\" (UID: \"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.597144 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" podStartSLOduration=10.785643938 podStartE2EDuration="32.597127025s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.327459858 +0000 UTC m=+920.394581121" lastFinishedPulling="2026-01-27 07:02:57.138942945 +0000 UTC m=+942.206064208" observedRunningTime="2026-01-27 07:03:05.59224211 +0000 UTC m=+950.659363393" watchObservedRunningTime="2026-01-27 07:03:05.597127025 +0000 UTC m=+950.664248288" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.674004 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" podStartSLOduration=3.834279966 podStartE2EDuration="32.673989584s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.55433838 +0000 UTC m=+920.621459643" lastFinishedPulling="2026-01-27 07:03:04.394047998 +0000 UTC m=+949.461169261" observedRunningTime="2026-01-27 07:03:05.666322681 +0000 UTC m=+950.733444224" watchObservedRunningTime="2026-01-27 07:03:05.673989584 +0000 UTC m=+950.741110847" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.674206 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" podStartSLOduration=3.8030241030000003 podStartE2EDuration="32.674201811s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.494740908 +0000 UTC m=+920.561862171" lastFinishedPulling="2026-01-27 07:03:04.365918616 +0000 UTC m=+949.433039879" observedRunningTime="2026-01-27 07:03:05.621425996 +0000 UTC m=+950.688547279" watchObservedRunningTime="2026-01-27 07:03:05.674201811 +0000 UTC m=+950.741323074" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.703534 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" podStartSLOduration=3.842811947 podStartE2EDuration="32.703517792s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.478646137 +0000 UTC m=+920.545767390" lastFinishedPulling="2026-01-27 07:03:04.339351972 +0000 UTC m=+949.406473235" observedRunningTime="2026-01-27 07:03:05.700211607 +0000 UTC m=+950.767332880" watchObservedRunningTime="2026-01-27 07:03:05.703517792 +0000 UTC m=+950.770639055" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.724904 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.733444 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" podStartSLOduration=11.125011339 podStartE2EDuration="32.733428781s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.531740392 +0000 UTC m=+920.598861655" lastFinishedPulling="2026-01-27 07:02:57.140157844 +0000 UTC m=+942.207279097" observedRunningTime="2026-01-27 07:03:05.732746549 +0000 UTC m=+950.799867812" watchObservedRunningTime="2026-01-27 07:03:05.733428781 +0000 UTC m=+950.800550044" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.766440 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" podStartSLOduration=23.546238811 podStartE2EDuration="32.766424238s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:34.713383568 +0000 UTC m=+919.780504831" lastFinishedPulling="2026-01-27 07:02:43.933568985 +0000 UTC m=+929.000690258" observedRunningTime="2026-01-27 07:03:05.763813245 +0000 UTC m=+950.830934508" watchObservedRunningTime="2026-01-27 07:03:05.766424238 +0000 UTC m=+950.833545501" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.786042 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" podStartSLOduration=10.512706724 podStartE2EDuration="32.78602881s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:34.865611419 +0000 UTC m=+919.932732682" lastFinishedPulling="2026-01-27 07:02:57.138933505 +0000 UTC m=+942.206054768" observedRunningTime="2026-01-27 07:03:05.782747507 +0000 UTC m=+950.849868760" watchObservedRunningTime="2026-01-27 07:03:05.78602881 +0000 UTC m=+950.853150073" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.835771 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" podStartSLOduration=9.826301058 podStartE2EDuration="32.835751659s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.289082591 +0000 UTC m=+920.356203854" lastFinishedPulling="2026-01-27 07:02:58.298533192 +0000 UTC m=+943.365654455" observedRunningTime="2026-01-27 07:03:05.83389884 +0000 UTC m=+950.901020123" watchObservedRunningTime="2026-01-27 07:03:05.835751659 +0000 UTC m=+950.902872932" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.879033 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" podStartSLOduration=4.058570756 podStartE2EDuration="32.879010312s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.55466545 +0000 UTC m=+920.621786713" lastFinishedPulling="2026-01-27 07:03:04.375105006 +0000 UTC m=+949.442226269" observedRunningTime="2026-01-27 07:03:05.876335787 +0000 UTC m=+950.943457050" watchObservedRunningTime="2026-01-27 07:03:05.879010312 +0000 UTC m=+950.946131575" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.937539 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" podStartSLOduration=10.005593918 podStartE2EDuration="32.937519899s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:34.751793917 +0000 UTC m=+919.818915170" lastFinishedPulling="2026-01-27 07:02:57.683719888 +0000 UTC m=+942.750841151" observedRunningTime="2026-01-27 07:03:05.932437268 +0000 UTC m=+950.999558591" watchObservedRunningTime="2026-01-27 07:03:05.937519899 +0000 UTC m=+951.004641162" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.972137 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:03:05 crc kubenswrapper[4729]: I0127 07:03:05.975729 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dddb667-9f1f-4fd1-a5be-c1eac664059e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7\" (UID: \"2dddb667-9f1f-4fd1-a5be-c1eac664059e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.090511 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.560684 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" event={"ID":"3e370583-f827-4e73-a360-10ea26ad6c94","Type":"ContainerStarted","Data":"164c48fa306a64f49ab399eac2105c8ea353926fa9c9ab96b4773eff0604f9ed"} Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.591243 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" event={"ID":"3608e6ab-1167-45e0-a01a-48f0339c45e0","Type":"ContainerStarted","Data":"31cb22670655337b50a1a4bebe455eff642c4532e1bd90eb59aa001a521f36f8"} Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.591841 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.593335 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs"] Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.607241 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" event={"ID":"f8bab3ea-2957-43a7-894e-25c3d7b54287","Type":"ContainerStarted","Data":"5c6020c21a9c29feaad6e7f8e5d522533a98bca0d744e528d6b9b0c8ae3fbd89"} Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.607749 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.620946 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" 
event={"ID":"7770e510-40b8-480e-a29b-f92bee72ccbb","Type":"ContainerStarted","Data":"442c9b892d30b2f3b9a91015a78e9c445ba9e811cfbcd4ce2a354037ddb91115"} Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.621359 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.650860 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mtbcm" podStartSLOduration=3.724154662 podStartE2EDuration="32.650842181s" podCreationTimestamp="2026-01-27 07:02:34 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.558488452 +0000 UTC m=+920.625609715" lastFinishedPulling="2026-01-27 07:03:04.485175971 +0000 UTC m=+949.552297234" observedRunningTime="2026-01-27 07:03:06.632600002 +0000 UTC m=+951.699721265" watchObservedRunningTime="2026-01-27 07:03:06.650842181 +0000 UTC m=+951.717963444" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.703575 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" podStartSLOduration=32.703557014 podStartE2EDuration="32.703557014s" podCreationTimestamp="2026-01-27 07:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:03:06.703128631 +0000 UTC m=+951.770249904" watchObservedRunningTime="2026-01-27 07:03:06.703557014 +0000 UTC m=+951.770678277" Jan 27 07:03:06 crc kubenswrapper[4729]: I0127 07:03:06.754131 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" podStartSLOduration=4.590524881 podStartE2EDuration="33.754114099s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.308777806 +0000 UTC m=+920.375899069" lastFinishedPulling="2026-01-27 07:03:04.472367014 +0000 UTC m=+949.539488287" observedRunningTime="2026-01-27 07:03:06.729738945 +0000 UTC m=+951.796860218" watchObservedRunningTime="2026-01-27 07:03:06.754114099 +0000 UTC m=+951.821235362" Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.257197 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" podStartSLOduration=4.952893783 podStartE2EDuration="34.257181587s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.10728616 +0000 UTC m=+920.174407423" lastFinishedPulling="2026-01-27 07:03:04.411573964 +0000 UTC m=+949.478695227" observedRunningTime="2026-01-27 07:03:06.75543495 +0000 UTC m=+951.822556213" watchObservedRunningTime="2026-01-27 07:03:07.257181587 +0000 UTC m=+952.324302850" Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.265425 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7"] Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.629100 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" event={"ID":"5504ffe5-4dd8-4ee6-a3d1-cd609ef82770","Type":"ContainerStarted","Data":"0ed8a4b5eaac7dcb04ded451862eb6a5e3109d6324cfbeca94098401f715a3f0"} Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.629382 
4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.632198 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" event={"ID":"208d609e-4bd2-4e07-8247-c817513cd3f1","Type":"ContainerStarted","Data":"b62784e66e56a8712079656219a411a3790629ada3b431926150a99ab910b0a2"} Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.632409 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.633221 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" event={"ID":"2dddb667-9f1f-4fd1-a5be-c1eac664059e","Type":"ContainerStarted","Data":"4112bdcd0263ade6937b54c3f9c062435ea72889ae58bce5f0d7d61d0c84e47d"} Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.634595 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" event={"ID":"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c","Type":"ContainerStarted","Data":"55312c61587027756ce149ebc6cb010de9697e5add67f10552eb02542cf0c16d"} Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.672852 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" podStartSLOduration=2.8013403 podStartE2EDuration="34.672830389s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.280746886 +0000 UTC m=+920.347868149" lastFinishedPulling="2026-01-27 07:03:07.152236975 +0000 UTC m=+952.219358238" observedRunningTime="2026-01-27 07:03:07.656429119 +0000 UTC m=+952.723550392" watchObservedRunningTime="2026-01-27 07:03:07.672830389 +0000 UTC m=+952.739951652" Jan 27 07:03:07 crc kubenswrapper[4729]: I0127 07:03:07.674620 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" podStartSLOduration=3.061927561 podStartE2EDuration="34.674613436s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.294949297 +0000 UTC m=+920.362070560" lastFinishedPulling="2026-01-27 07:03:06.907635172 +0000 UTC m=+951.974756435" observedRunningTime="2026-01-27 07:03:07.669619458 +0000 UTC m=+952.736740741" watchObservedRunningTime="2026-01-27 07:03:07.674613436 +0000 UTC m=+952.741734709" Jan 27 07:03:08 crc kubenswrapper[4729]: I0127 07:03:08.644466 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" event={"ID":"746a8b7a-6b43-4265-8766-03f4d5682468","Type":"ContainerStarted","Data":"cf2607d9e1c6beb75a28cd4dfe9b1a460b5d79ee9e2d0a914f3a73fd24b40e94"} Jan 27 07:03:08 crc kubenswrapper[4729]: I0127 07:03:08.644967 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.657311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" 
event={"ID":"2dddb667-9f1f-4fd1-a5be-c1eac664059e","Type":"ContainerStarted","Data":"a596b89b45ec100c477b2baf8a6cc0a668b94e5c8744e6c4711086aef844984f"} Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.657866 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.658683 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" event={"ID":"411fadb3-87a6-4a23-b0a8-a6ba4f5e529c","Type":"ContainerStarted","Data":"ed45040e1a623b89b916e218fed88b4a7bf3690eab2af573d162a51372d9d504"} Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.658805 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.681938 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" podStartSLOduration=34.814425885 podStartE2EDuration="37.681920992s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:03:07.296316129 +0000 UTC m=+952.363437392" lastFinishedPulling="2026-01-27 07:03:10.163811236 +0000 UTC m=+955.230932499" observedRunningTime="2026-01-27 07:03:10.681650983 +0000 UTC m=+955.748772266" watchObservedRunningTime="2026-01-27 07:03:10.681920992 +0000 UTC m=+955.749042255" Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.685652 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" podStartSLOduration=5.25876443 podStartE2EDuration="37.685623558s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.463894209 +0000 UTC m=+920.531015462" lastFinishedPulling="2026-01-27 07:03:07.890753337 +0000 UTC m=+952.957874590" observedRunningTime="2026-01-27 07:03:08.661419068 +0000 UTC m=+953.728540331" watchObservedRunningTime="2026-01-27 07:03:10.685623558 +0000 UTC m=+955.752744821" Jan 27 07:03:10 crc kubenswrapper[4729]: I0127 07:03:10.707316 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" podStartSLOduration=34.174983788 podStartE2EDuration="37.707295757s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:03:06.61616893 +0000 UTC m=+951.683290193" lastFinishedPulling="2026-01-27 07:03:10.148480899 +0000 UTC m=+955.215602162" observedRunningTime="2026-01-27 07:03:10.700872603 +0000 UTC m=+955.767993876" watchObservedRunningTime="2026-01-27 07:03:10.707295757 +0000 UTC m=+955.774417030" Jan 27 07:03:12 crc kubenswrapper[4729]: I0127 07:03:12.674008 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" event={"ID":"3413118b-46c0-420a-b381-b2fb3c3e7d5f","Type":"ContainerStarted","Data":"ffe71f4030f89411619a795005b66d789d9ad711c3a248f665a1ea51da31599f"} Jan 27 07:03:12 crc kubenswrapper[4729]: I0127 07:03:12.674546 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:03:12 crc kubenswrapper[4729]: I0127 07:03:12.698820 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" podStartSLOduration=3.1754842659999998 podStartE2EDuration="39.698802899s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.313587498 +0000 UTC m=+920.380708761" lastFinishedPulling="2026-01-27 07:03:11.836906131 +0000 UTC m=+956.904027394" observedRunningTime="2026-01-27 07:03:12.694144181 +0000 UTC m=+957.761265454" watchObservedRunningTime="2026-01-27 07:03:12.698802899 +0000 UTC m=+957.765924172" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.687103 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" event={"ID":"b7167efa-455e-473c-9a89-d9a1b55f6523","Type":"ContainerStarted","Data":"77619b376113dd7c892466660071303355d295977879cce8aec024507d656987"} Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.688776 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.708296 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" podStartSLOduration=3.230060807 podStartE2EDuration="40.70826942s" podCreationTimestamp="2026-01-27 07:02:33 +0000 UTC" firstStartedPulling="2026-01-27 07:02:35.313165494 +0000 UTC m=+920.380286757" lastFinishedPulling="2026-01-27 07:03:12.791374097 +0000 UTC m=+957.858495370" observedRunningTime="2026-01-27 07:03:13.707423833 +0000 UTC m=+958.774545136" watchObservedRunningTime="2026-01-27 07:03:13.70826942 +0000 UTC m=+958.775390693" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.726197 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-t6jms" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.773815 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-5drlv" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.807300 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vtpvf" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.851612 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-ggjqc" Jan 27 07:03:13 crc kubenswrapper[4729]: I0127 07:03:13.999580 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2mstg" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.168808 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-ltqnm" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.189439 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.230109 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-z8hhj" Jan 27 07:03:14 
crc kubenswrapper[4729]: I0127 07:03:14.256617 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-dv45b" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.278512 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-sl5ww" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.309887 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-22f24" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.487671 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bkkbb" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.639211 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-24xxm" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.644780 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-mphcb" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.688896 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cqxx6" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.803582 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-qdzzc" Jan 27 07:03:14 crc kubenswrapper[4729]: I0127 07:03:14.867046 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-ngvsf" Jan 27 07:03:15 crc kubenswrapper[4729]: I0127 07:03:15.737276 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-fbmgs" Jan 27 07:03:16 crc kubenswrapper[4729]: I0127 07:03:16.096862 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7" Jan 27 07:03:20 crc kubenswrapper[4729]: I0127 07:03:20.562736 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-qd5rn" Jan 27 07:03:24 crc kubenswrapper[4729]: I0127 07:03:24.143408 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kl8s6" Jan 27 07:03:24 crc kubenswrapper[4729]: I0127 07:03:24.208606 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lhb4p" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.486699 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tr7bt/must-gather-bnzsp"] Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.490411 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.495821 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tr7bt"/"kube-root-ca.crt" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.496205 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tr7bt"/"openshift-service-ca.crt" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.496428 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tr7bt"/"default-dockercfg-65bj7" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.518472 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tr7bt/must-gather-bnzsp"] Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.574279 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn62b\" (UniqueName: \"kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.574321 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.675294 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn62b\" (UniqueName: \"kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.675329 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.675689 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.696229 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn62b\" (UniqueName: \"kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b\") pod \"must-gather-bnzsp\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:04 crc kubenswrapper[4729]: I0127 07:04:04.816284 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:04:05 crc kubenswrapper[4729]: W0127 07:04:05.405948 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e1b674_a452_4d9b_8b85_f178402ea517.slice/crio-270e2942907df040f2f7a0131de5bf5e006469a9f3eba9a0203ff16b077b7474 WatchSource:0}: Error finding container 270e2942907df040f2f7a0131de5bf5e006469a9f3eba9a0203ff16b077b7474: Status 404 returned error can't find the container with id 270e2942907df040f2f7a0131de5bf5e006469a9f3eba9a0203ff16b077b7474 Jan 27 07:04:05 crc kubenswrapper[4729]: I0127 07:04:05.415499 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:04:05 crc kubenswrapper[4729]: I0127 07:04:05.419714 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tr7bt/must-gather-bnzsp"] Jan 27 07:04:06 crc kubenswrapper[4729]: I0127 07:04:06.111303 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" event={"ID":"d5e1b674-a452-4d9b-8b85-f178402ea517","Type":"ContainerStarted","Data":"270e2942907df040f2f7a0131de5bf5e006469a9f3eba9a0203ff16b077b7474"} Jan 27 07:04:12 crc kubenswrapper[4729]: I0127 07:04:12.156106 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" event={"ID":"d5e1b674-a452-4d9b-8b85-f178402ea517","Type":"ContainerStarted","Data":"7f4b6669b4b38f88071237eb0fd2a02675413a257171fefe3e9ac72e69d8912f"} Jan 27 07:04:13 crc kubenswrapper[4729]: I0127 07:04:13.164676 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" event={"ID":"d5e1b674-a452-4d9b-8b85-f178402ea517","Type":"ContainerStarted","Data":"5495b29ccdce933a34c3ac2cd3abb489088aa0f0df71012e89d3698bc4e232e9"} Jan 27 07:04:13 crc kubenswrapper[4729]: I0127 07:04:13.178162 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" podStartSLOduration=2.824674695 podStartE2EDuration="9.178141518s" podCreationTimestamp="2026-01-27 07:04:04 +0000 UTC" firstStartedPulling="2026-01-27 07:04:05.408843483 +0000 UTC m=+1010.475964766" lastFinishedPulling="2026-01-27 07:04:11.762310316 +0000 UTC m=+1016.829431589" observedRunningTime="2026-01-27 07:04:13.175613679 +0000 UTC m=+1018.242734952" watchObservedRunningTime="2026-01-27 07:04:13.178141518 +0000 UTC m=+1018.245262811" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.417933 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/util/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.592825 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/util/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.670515 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/pull/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.681530 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/pull/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.861191 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/util/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.888347 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/pull/0.log" Jan 27 07:05:14 crc kubenswrapper[4729]: I0127 07:05:14.928769 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e964672c_418934a2-192f-4722-a381-111040d505b7/extract/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.102089 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-2mstg_f8bab3ea-2957-43a7-894e-25c3d7b54287/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.159575 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-t6jms_8fae3d38-94af-48f2-a91d-b465752c4d15/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.317900 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-5drlv_841f8e61-a7f0-4cd6-9554-bbcd391a9431/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.478280 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-vtpvf_68e763f0-9028-45f1-9a58-c19c5b0189ff/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.531964 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-74866cc64d-ggjqc_5504ffe5-4dd8-4ee6-a3d1-cd609ef82770/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.732199 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lhb4p_3413118b-46c0-420a-b381-b2fb3c3e7d5f/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.848896 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-fbmgs_411fadb3-87a6-4a23-b0a8-a6ba4f5e529c/manager/0.log" Jan 27 07:05:15 crc kubenswrapper[4729]: I0127 07:05:15.945052 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-22f24_f0efecd6-a0c0-419c-a650-4e82d66a0080/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.043961 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-kl8s6_b7167efa-455e-473c-9a89-d9a1b55f6523/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.180035 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-ltqnm_7770e510-40b8-480e-a29b-f92bee72ccbb/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.250273 4729 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4nmwf_2651401b-47d5-40f2-aaf2-316078ea8ab4/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.441567 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-z8hhj_746a8b7a-6b43-4265-8766-03f4d5682468/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.538297 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-dv45b_1363c91a-c57e-47da-b442-1331d4a4e4c1/manager/0.log" Jan 27 07:05:16 crc kubenswrapper[4729]: I0127 07:05:16.738479 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-sl5ww_208d609e-4bd2-4e07-8247-c817513cd3f1/manager/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.009230 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854lwxl7_2dddb667-9f1f-4fd1-a5be-c1eac664059e/manager/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.238932 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64d6b84b7b-qd5rn_3608e6ab-1167-45e0-a01a-48f0339c45e0/manager/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.388424 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c58fc478-hdj28_1de6892f-5b4c-4722-be63-3e35853e6b20/operator/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.553148 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-57z7c_a6a81d4c-4fb0-4e4f-a09e-0540bbd2f702/registry-server/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.616022 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cqxx6_9fafbb5d-f848-4dc6-9bb4-4101924b2ab2/manager/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.789888 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-bkkbb_59e3aaff-61c0-4878-bd07-8a83e07eda58/manager/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.954467 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mtbcm_3e370583-f827-4e73-a360-10ea26ad6c94/operator/0.log" Jan 27 07:05:17 crc kubenswrapper[4729]: I0127 07:05:17.999703 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-mphcb_aa0fd3d1-eec0-44e5-a28d-d009ae449ee0/manager/0.log" Jan 27 07:05:18 crc kubenswrapper[4729]: I0127 07:05:18.201252 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-24xxm_9b7c6228-376e-4dd1-a3bc-6d93e27cbf60/manager/0.log" Jan 27 07:05:18 crc kubenswrapper[4729]: I0127 07:05:18.270113 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-ngvsf_53cd31cd-4d1b-4d0c-8d48-1826850ed320/manager/0.log" Jan 27 07:05:18 crc kubenswrapper[4729]: I0127 07:05:18.384553 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-qdzzc_d75de196-b27a-44e6-a707-210294d056f5/manager/0.log" Jan 27 07:05:31 crc kubenswrapper[4729]: I0127 07:05:31.087660 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:05:31 crc kubenswrapper[4729]: I0127 07:05:31.088385 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:05:40 crc kubenswrapper[4729]: I0127 07:05:40.646728 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cgdkp_18883629-496c-42b8-8fab-68b3daa9fcad/control-plane-machine-set-operator/0.log" Jan 27 07:05:40 crc kubenswrapper[4729]: I0127 07:05:40.798570 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-th46k_b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4/kube-rbac-proxy/0.log" Jan 27 07:05:40 crc kubenswrapper[4729]: I0127 07:05:40.900927 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-th46k_b776cb93-ec0d-4bda-b0fb-1b0fc4eed9c4/machine-api-operator/0.log" Jan 27 07:05:54 crc kubenswrapper[4729]: I0127 07:05:54.639126 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-w6c52_71eeaf2d-639f-494a-8b0a-230e4c800a72/cert-manager-controller/0.log" Jan 27 07:05:54 crc kubenswrapper[4729]: I0127 07:05:54.831013 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hsjzv_7a77adf9-d502-450e-b5b7-8421a46b658c/cert-manager-cainjector/0.log" Jan 27 07:05:54 crc kubenswrapper[4729]: I0127 07:05:54.934674 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bp54v_7fbae4f4-f168-4142-9f19-6644887d053f/cert-manager-webhook/0.log" Jan 27 07:06:01 crc kubenswrapper[4729]: I0127 07:06:01.086863 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:06:01 crc kubenswrapper[4729]: I0127 07:06:01.087405 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:06:08 crc kubenswrapper[4729]: I0127 07:06:08.659690 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wd76v_e2954439-230c-448d-bb20-d1a458a99432/nmstate-console-plugin/0.log" Jan 27 07:06:08 crc kubenswrapper[4729]: I0127 07:06:08.817386 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-qgfkk_329d538c-94a1-4eec-bf1f-0a867d6f8db1/nmstate-handler/0.log" Jan 27 07:06:08 crc kubenswrapper[4729]: I0127 07:06:08.881581 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mpwfg_289f2f6a-405b-4f18-a09b-80f4ad0b4f32/kube-rbac-proxy/0.log" Jan 27 07:06:08 crc kubenswrapper[4729]: I0127 07:06:08.942425 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mpwfg_289f2f6a-405b-4f18-a09b-80f4ad0b4f32/nmstate-metrics/0.log" Jan 27 07:06:09 crc kubenswrapper[4729]: I0127 07:06:09.125506 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-94b9n_c1f989f4-8cae-498d-99ec-fcb530e3933a/nmstate-operator/0.log" Jan 27 07:06:09 crc kubenswrapper[4729]: I0127 07:06:09.146496 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-k6zqj_bf8073cf-8829-412d-a338-f198333488f2/nmstate-webhook/0.log" Jan 27 07:06:31 crc kubenswrapper[4729]: I0127 07:06:31.087793 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:06:31 crc kubenswrapper[4729]: I0127 07:06:31.088286 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:06:31 crc kubenswrapper[4729]: I0127 07:06:31.088332 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 07:06:31 crc kubenswrapper[4729]: I0127 07:06:31.088766 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:06:31 crc kubenswrapper[4729]: I0127 07:06:31.088814 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0" gracePeriod=600 Jan 27 07:06:32 crc kubenswrapper[4729]: I0127 07:06:32.043622 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0" exitCode=0 Jan 27 07:06:32 crc kubenswrapper[4729]: I0127 07:06:32.043715 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0"} Jan 27 07:06:32 crc kubenswrapper[4729]: I0127 07:06:32.044281 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"f18db5a1336ac02417ed34a395973f3ed0fe123116d4751321bc4700b8072886"} Jan 27 07:06:32 crc kubenswrapper[4729]: I0127 07:06:32.044310 4729 scope.go:117] "RemoveContainer" containerID="1461a2d53f2bcef42ef23fcff0f45288da459bca413167e7db07cd4d0b094c40" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.036593 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-drtzd_0dad915a-2911-45cf-9c6c-ca28066dfc55/kube-rbac-proxy/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.146901 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-drtzd_0dad915a-2911-45cf-9c6c-ca28066dfc55/controller/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.344607 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-frr-files/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.520671 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-reloader/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.539163 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-frr-files/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.561536 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-metrics/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.614207 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-reloader/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.817857 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-metrics/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.818194 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-metrics/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.824610 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-reloader/0.log" Jan 27 07:06:36 crc kubenswrapper[4729]: I0127 07:06:36.877138 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-frr-files/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.009288 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-metrics/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.018469 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-reloader/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.026971 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/cp-frr-files/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.105877 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/controller/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.270008 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/kube-rbac-proxy/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.304528 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/frr-metrics/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.320500 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/frr/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.329718 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/kube-rbac-proxy-frr/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.456050 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k7952_0e76713c-f4f5-4566-b8b9-3e125e997b1d/reloader/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.500107 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gt2zd_cb874789-6f11-4e02-93ea-4db078896622/frr-k8s-webhook-server/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.671949 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-589bffb6f5-hrlgn_e30eab6b-769a-40f2-9dc0-6f2c54082eca/manager/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.793162 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-76b44fd978-464zw_2c4da10b-2833-4206-900f-205d963cc173/webhook-server/0.log" Jan 27 07:06:37 crc kubenswrapper[4729]: I0127 07:06:37.861195 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jg8br_603cd317-f05d-4596-9dc9-4a7c55b1f1f4/kube-rbac-proxy/0.log" Jan 27 07:06:38 crc kubenswrapper[4729]: I0127 07:06:38.055619 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jg8br_603cd317-f05d-4596-9dc9-4a7c55b1f1f4/speaker/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.093038 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/util/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.299708 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/pull/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.304630 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/pull/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.474061 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/pull/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.488712 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/util/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.508146 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/util/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.570965 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsjbj_6246b5cc-e7d5-4791-b188-7a4bb601ac73/extract/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.705385 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/util/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.887761 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/pull/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.897046 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/util/0.log" Jan 27 07:06:51 crc kubenswrapper[4729]: I0127 07:06:51.923188 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/pull/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.102198 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/pull/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.102896 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/extract/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.104775 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713b8bcs_11342f63-e747-4052-a4e3-f38d99033488/util/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.291325 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-utilities/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.477890 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-content/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.542472 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-utilities/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.554815 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-content/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.824947 4729 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-content/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.875896 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/extract-utilities/0.log" Jan 27 07:06:52 crc kubenswrapper[4729]: I0127 07:06:52.897374 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n59rz_b30c6af0-ba20-4257-a0cd-561fca708a60/registry-server/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.018986 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-utilities/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.201822 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-content/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.215835 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-content/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.223630 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-utilities/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.472143 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-utilities/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.529690 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/extract-content/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.626222 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgwph_d748891f-d6ea-4e31-8e19-c41fe08949ab/registry-server/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.713340 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-l5hnb_6c8ac9fd-a774-447b-a382-e09ff41f678b/marketplace-operator/0.log" Jan 27 07:06:53 crc kubenswrapper[4729]: I0127 07:06:53.855517 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.026128 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-content/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.033438 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-content/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.076943 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.204730 4729 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.245714 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/extract-content/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.360647 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2wb4g_46d40a19-dcbc-4606-b3f6-cda045b22c35/registry-server/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.404620 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.642124 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-content/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.650941 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-content/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.655029 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.874647 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-utilities/0.log" Jan 27 07:06:54 crc kubenswrapper[4729]: I0127 07:06:54.972205 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/extract-content/0.log" Jan 27 07:06:55 crc kubenswrapper[4729]: I0127 07:06:55.211491 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwxfg_967e206f-9b9e-4691-9041-6b91c6732721/registry-server/0.log" Jan 27 07:08:02 crc kubenswrapper[4729]: I0127 07:08:02.223518 4729 generic.go:334] "Generic (PLEG): container finished" podID="d5e1b674-a452-4d9b-8b85-f178402ea517" containerID="7f4b6669b4b38f88071237eb0fd2a02675413a257171fefe3e9ac72e69d8912f" exitCode=0 Jan 27 07:08:02 crc kubenswrapper[4729]: I0127 07:08:02.223814 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" event={"ID":"d5e1b674-a452-4d9b-8b85-f178402ea517","Type":"ContainerDied","Data":"7f4b6669b4b38f88071237eb0fd2a02675413a257171fefe3e9ac72e69d8912f"} Jan 27 07:08:02 crc kubenswrapper[4729]: I0127 07:08:02.224956 4729 scope.go:117] "RemoveContainer" containerID="7f4b6669b4b38f88071237eb0fd2a02675413a257171fefe3e9ac72e69d8912f" Jan 27 07:08:03 crc kubenswrapper[4729]: I0127 07:08:03.133111 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7bt_must-gather-bnzsp_d5e1b674-a452-4d9b-8b85-f178402ea517/gather/0.log" Jan 27 07:08:09 crc kubenswrapper[4729]: I0127 07:08:09.956586 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tr7bt/must-gather-bnzsp"] Jan 27 07:08:09 crc kubenswrapper[4729]: I0127 07:08:09.958261 4729 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-tr7bt/must-gather-bnzsp" podUID="d5e1b674-a452-4d9b-8b85-f178402ea517" containerName="copy" containerID="cri-o://5495b29ccdce933a34c3ac2cd3abb489088aa0f0df71012e89d3698bc4e232e9" gracePeriod=2 Jan 27 07:08:09 crc kubenswrapper[4729]: I0127 07:08:09.967736 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tr7bt/must-gather-bnzsp"] Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.291936 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7bt_must-gather-bnzsp_d5e1b674-a452-4d9b-8b85-f178402ea517/copy/0.log" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.293874 4729 generic.go:334] "Generic (PLEG): container finished" podID="d5e1b674-a452-4d9b-8b85-f178402ea517" containerID="5495b29ccdce933a34c3ac2cd3abb489088aa0f0df71012e89d3698bc4e232e9" exitCode=143 Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.293915 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270e2942907df040f2f7a0131de5bf5e006469a9f3eba9a0203ff16b077b7474" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.355668 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7bt_must-gather-bnzsp_d5e1b674-a452-4d9b-8b85-f178402ea517/copy/0.log" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.356139 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.403758 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn62b\" (UniqueName: \"kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b\") pod \"d5e1b674-a452-4d9b-8b85-f178402ea517\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.403952 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output\") pod \"d5e1b674-a452-4d9b-8b85-f178402ea517\" (UID: \"d5e1b674-a452-4d9b-8b85-f178402ea517\") " Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.410126 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b" (OuterVolumeSpecName: "kube-api-access-bn62b") pod "d5e1b674-a452-4d9b-8b85-f178402ea517" (UID: "d5e1b674-a452-4d9b-8b85-f178402ea517"). InnerVolumeSpecName "kube-api-access-bn62b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.487408 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5e1b674-a452-4d9b-8b85-f178402ea517" (UID: "d5e1b674-a452-4d9b-8b85-f178402ea517"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.509355 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn62b\" (UniqueName: \"kubernetes.io/projected/d5e1b674-a452-4d9b-8b85-f178402ea517-kube-api-access-bn62b\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:10 crc kubenswrapper[4729]: I0127 07:08:10.509640 4729 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5e1b674-a452-4d9b-8b85-f178402ea517-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:11 crc kubenswrapper[4729]: I0127 07:08:11.299527 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7bt/must-gather-bnzsp" Jan 27 07:08:12 crc kubenswrapper[4729]: I0127 07:08:12.370445 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e1b674-a452-4d9b-8b85-f178402ea517" path="/var/lib/kubelet/pods/d5e1b674-a452-4d9b-8b85-f178402ea517/volumes" Jan 27 07:08:31 crc kubenswrapper[4729]: I0127 07:08:31.178780 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:08:31 crc kubenswrapper[4729]: I0127 07:08:31.179332 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:09:01 crc kubenswrapper[4729]: I0127 07:09:01.087013 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:09:01 crc kubenswrapper[4729]: I0127 07:09:01.087718 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.086732 4729 patch_prober.go:28] interesting pod/machine-config-daemon-5x25t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.088516 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.088661 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" Jan 27 07:09:31 crc kubenswrapper[4729]: 
I0127 07:09:31.089392 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f18db5a1336ac02417ed34a395973f3ed0fe123116d4751321bc4700b8072886"} pod="openshift-machine-config-operator/machine-config-daemon-5x25t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.089557 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" podUID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerName="machine-config-daemon" containerID="cri-o://f18db5a1336ac02417ed34a395973f3ed0fe123116d4751321bc4700b8072886" gracePeriod=600 Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.846061 4729 generic.go:334] "Generic (PLEG): container finished" podID="526865eb-4ab7-486d-925d-6b4583d6b86f" containerID="f18db5a1336ac02417ed34a395973f3ed0fe123116d4751321bc4700b8072886" exitCode=0 Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.846256 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerDied","Data":"f18db5a1336ac02417ed34a395973f3ed0fe123116d4751321bc4700b8072886"} Jan 27 07:09:31 crc kubenswrapper[4729]: I0127 07:09:31.846368 4729 scope.go:117] "RemoveContainer" containerID="cf9c58cc07f8daa1a56682b1bdc8052b20a895a3b6aef5e32fcf24b19c7ab2a0" Jan 27 07:09:32 crc kubenswrapper[4729]: I0127 07:09:32.853687 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5x25t" event={"ID":"526865eb-4ab7-486d-925d-6b4583d6b86f","Type":"ContainerStarted","Data":"4910be57131ed521249a5a1f03189b42b18af618abf5d50a58412f0f562d77fd"} Jan 27 07:10:16 crc kubenswrapper[4729]: I0127 07:10:16.821948 4729 scope.go:117] "RemoveContainer" containerID="5495b29ccdce933a34c3ac2cd3abb489088aa0f0df71012e89d3698bc4e232e9" Jan 27 07:10:16 crc kubenswrapper[4729]: I0127 07:10:16.841112 4729 scope.go:117] "RemoveContainer" containerID="7f4b6669b4b38f88071237eb0fd2a02675413a257171fefe3e9ac72e69d8912f"